although this group comprises 13% of the U.S. population; this contrasts with the white population, who make up 76% of the U.S. yet accounted for 45% of these hospitalizations (Aubrey, 2020). The Hispanic and Latino communities have also been hit harder by the outbreak. For instance, in New York City, 34% of the New Yorkers who have died of COVID-19 are Latino, despite Latinos making up 29% of the city's total population (Mays & Newman, 2020). This alarming disparity in COVID-19 mortality rates should not be surprising, considering that chronic health conditions such as heart failure, asthma, hypertension, diabetes mellitus, and HIV occur at much higher rates, begin earlier, and are treated later in the Black population than in the white population (Williams et al., 2010).
Native Americans, especially those who live in rural areas and on reservations such as the Navajo Nation, have been hit especially hard by pandemics: NBC News calls the Navajo Nation's health services, managed by the U.S. Indian Health Service (IHS), "limited" (Abou-Sabe et al., 2020, para. 1). A quarter of the Diné population died from the "Spanish flu" of 1918; during the swine flu epidemic, Diné (Navajo) people died at a rate "4-5 times higher than other Americans"; currently, the Nation's COVID-19 infection rate is "ten times higher per capita than its neighboring state, Arizona" (Lange, 2020, para. 4)

home, it serves as a safety net for its patients and many community members. Like many other health care organizations, it has experienced delays in receiving needed COVID-19 testing supplies; however, this testing is currently underway. Clinical services quickly transitioned to maintain access to care for patients. Telehealth for primary care, dental, and behavioral health services was rolled out after its partner, FPCN, obtained the necessary approvals from insurance companies and the Department of State, Bureau of Professional and Occupational Affairs licensing boards. Specifically, psychiatric services in primary care, including integrated behavioral health consultants and the psychiatric nurse practitioner, were able to maintain their transdisciplinary team orientation with all providers in primary care to attend to the needs of patients across the lifespan. Likewise, the larger behavioral health department shifted to operating at full capacity using telehealth, including HIPAA-compliant Zoom. On average, 95% of sessions are now virtual; routine van services were therefore placed on hold, but additional delivery drivers were hired for the health center's pharmacy, which delivers medications for free within a two-hour time frame to any patient living in Philadelphia. Similarly, its mind-body services for patients, community members, and staff were all converted to virtual sessions, including programming for fitness, yoga, and mindfulness meditation. Additionally, Zumba and Pound classes were offered to staff. These services are integral to mitigating and managing stress and promoting engagement with others. Social services have also enhanced their work at the health

The goal is to develop "hyperlocal" (neighborhood-level) strategies to slow the spread of coronavirus and improve health outcomes for residents of the communities hardest hit by this crisis. These communities have struggled with limited access to healthcare services and insufficient primary care providers, long-standing unemployment, a dearth of businesses, chronic illnesses, and a panoply of health risks and stressors that contribute to the pronounced lifespan gap between white and Black Chicago residents (Pratt, 2020). The panel consists of not only the usual business leaders and healthcare experts, but also a representative from NAMI and community advocates such as Mr. Anton Seals Jr., whose words are quoted at the beginning of this piece. As Mayor Lightfoot said, "This cannot be temporary scaffolding. It's got to be laying a foundation for a permanent fix to many of the problems that for too long we have ignored or said, they are too big to solve" (Pratt, 2020, p. 4).
We will be watching what happens to the Chicago panel and its work after the pandemic is over. We hope the work of the panel and these other models will be used as best-practice guides to change the economic maps and social injustices that still foster these health disparities, especially for the Black and brown racialized populations in this country. The new normal they can create should include better distribution of resources, such as clean water, more primary care providers, and telehealth and telecounseling services, to people who need them…wherever they live.
This should not be a matter of "if you pay, then you can play," but a best-practices model of incorporating the major social determinants of health into assessment and treatment services for all who need health services, now during the pandemic and afterward.
Inequities in health are fueled by cultural and societal norms based on racism and racist practices in the United States, particularly during the COVID-19 pandemic, and will be evident in its aftermath. Health equity, a process that assures conditions of optimal health for all peoples, requires attention to the social determinants of health: economic stability, education, community services and safety, healthcare, respectful communication, and affordable, decent housing for all populations. Racial diversity has greatly increased in the U.S., and the needs of our diverse population should inform health care practices and policy making in order to preclude disparities in these determinants, as well as the healthcare inequities that persist most pointedly along racial lines. But knowing this is not enough: doing is what is called for.
The COVID-19 pandemic in India is part of the worldwide COVID-19 pandemic caused by severe acute respiratory syndrome coronavirus 2. India currently reports the highest number of cases in Asia; however, its fatality rate (2.8%) is relatively lower than the world's (6.1%) as of 3rd June 2020. On 12th January 2020, the World Health Organization (WHO) declared that a novel coronavirus was responsible for the respiratory illness of a community of people in Wuhan, China. The first confirmed case in India was reported on 30th January, and the first confirmed death was reported on 12th March. The Government of India then asked all Indians to maintain social distancing as a preventive measure, to be followed until 31st March; before that date, however, a nationwide lockdown was announced. Only essential services were kept open, major cities and some states made wearing face masks compulsory, central armed forces came into action, and helpline numbers were set up.
Various research agencies, along with the Government of India, took the initiative to gather data on COVID-19, and a database was set up that included real-time data on the number of confirmed cases, deaths, and recovered cases, broken down by age, gender, and geographical location, as well as India's position with respect to other countries. Data were also collected on the number of diagnostic tests performed at the state and district level. The nation was divided into three zones: a) Red zones (hotspots), b) Orange zones (non-hotspots), and c) Green zones (districts without confirmed cases for three consecutive weeks). The nation witnessed a huge economic downturn in these months. Thousands of people lost their jobs. The retail sector became the biggest casualty of the lockdown, and the tourism, hospitality, and aviation sectors also faced huge losses. The pandemic allowed near-zero inflows of tourists and visitors, which heavily impacted travel agents, hotels, and airlines. According to sources, at least 4,200 companies across the country have been forced to shut down. Global supply chains have been heavily disrupted.
According to data reported by the Centre for Monitoring Indian Economy (CMIE), the unemployment rate reached 30% in urban and 21% in rural areas, bringing the country's total unemployment rate to 23.8%. However, there are also some potential winners in this lockdown. Video conferencing apps witnessed their biggest gains as online meetings became key to keeping businesses running. The medical supply, information and communication technology (ICT), supermarket, and e-commerce sectors are other potential winners, and the telecom sector has also reported increased demand for internet and voice services during the lockdown. The Ministry of Electronics and IT has launched an application called Aarogya Setu, whose database provides the information necessary for contact tracing. The government has also announced a package of Rs 20.97 lakh crore to be infused into several sectors; this package accounts for nearly 10% of the country's GDP. Researchers are not sure where the situation will lead. There is a lot of uncertainty in every predictive model given by data scientists. As of now, all we can do is stay at home, follow the instructions, and build strong immunity.
A lot of research papers related to COVID-19 have been published, including work on vaccines and other medical drugs that can aid recovery. One study used regression models for forecasting and expected cases to rise to about 5,000 within a two-week period; this was fairly accurate, although the actual scenario showed an even bigger upsurge. The present paper can also be of help to several other sectors and branches of healthcare, since immunity is closely related to fighting COVID-19.
According to healthcare experts, people with a less developed immune system are more likely to fall victim to COVID-19. A research paper by Gail Dovey-Pearce, Ruth Hurrell, and Carl May in the journal Health & Social Care in the Community offers suggestions for providing developmentally appropriate diabetes services, focusing on the opinions of young adults (16-25 years). People suffering from diabetes are more prone to many other diseases because of reduced immunity, so it is high time we gathered information from healthcare workers on how to stay fit and healthy during this pandemic. Another paper captures the trend of cases and makes predictions using the Fourier Decomposition Method; it collects data up to the first week of June and predicts the expected number of cases and deaths in the upcoming days.
Visualizations have always made raw data easier to understand. Here we compare the growth of COVID-19 confirmed, death, and recovered cases in India with those of other heavily infected major countries. The visualizations are created in Python using the matplotlib, seaborn, and plotly libraries, along with the datetime library for time-series analysis; a minimal sketch follows. Data for India and the world are gathered from multiple data sheets on Kaggle.
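As an illustration, the sketch below shows the general shape of such a cumulative-case plot. It assumes a hypothetical Kaggle-style file covid_19_data.csv with Date, Country, and Confirmed columns; the actual sheets used may be laid out differently.

```python
# Minimal sketch of the cumulative-cases comparison described above.
# File name and column names are hypothetical stand-ins for the
# Kaggle sheets actually used.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("covid_19_data.csv", parse_dates=["Date"])

for country in ["India", "US", "Brazil", "Russia"]:
    # Sum over regions so each country yields one series per date
    series = (df[df["Country"] == country]
              .groupby("Date")["Confirmed"].sum())
    plt.plot(series.index, series.values, label=country)

plt.xlabel("Date")
plt.ylabel("Cumulative confirmed cases")
plt.title("Growth of confirmed COVID-19 cases")
plt.legend()
plt.tight_layout()
plt.show()
```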
The above graph supports the fact that more patients are either recovering from the disease or dying due to COVID-19.
Weekly growth of cases worldwide starting from January 2020: cases increase considerably from week 9 (i.e., March) and then experience sharp growth.
The visualization is done on a logarithmic scale. It is quite evident that the pandemic has spread rapidly.
Machine learning provides an excellent clustering capability, which helps us categorize countries by the severity of the pandemic. Severity can be measured on several features; here we consider the mortality and recovery rates of countries. We use both k-means and hierarchical clustering, and both methods suggest that a suitable number of clusters is 3 (a sketch is given below).
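A minimal sketch of this step, assuming scikit-learn and SciPy are available; the per-country values below are toy stand-ins for the real rates, not the study's data.

```python
# Sketch: cluster countries by mortality and recovery rate with both
# k-means and Ward hierarchical clustering. All values are toy data.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

country_stats = pd.DataFrame(
    {"country": ["India", "USA", "Italy", "Germany", "Peru", "Mexico"],
     "mortality_rate": [0.028, 0.05, 0.14, 0.045, 0.03, 0.12],
     "recovery_rate": [0.48, 0.27, 0.75, 0.91, 0.42, 0.70]})

# Standardize so both rates contribute equally to the distances
X = StandardScaler().fit_transform(
    country_stats[["mortality_rate", "recovery_rate"]])

# k-means with k = 3, the cluster count both methods suggested
country_stats["kmeans"] = KMeans(
    n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Ward hierarchical clustering cut into 3 clusters for comparison
country_stats["hierarchical"] = fcluster(
    linkage(X, method="ward"), t=3, criterion="maxclust")

print(country_stats)
```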
Therefore, considering k = 3, we get the following clusters:
Cluster   Mortality Rate   Recovery Rate
0         High             Low
1         Low              High
2         Medium           Medium
We can see that countries belonging to cluster 1 are in a comparatively safer zone, with a low mortality rate and a high recovery rate. Countries belonging to cluster 1 include India, Peru, Chile, Pakistan, Bangladesh, the USA, and Russia.
A gender-distribution analysis reveals that males are more likely to be diagnosed with COVID-19.
Drilling down to the present scenario of India, data available from various sources reveal that COVID-19 reached India at a slightly later stage. The first case was diagnosed on 30th January, and a few cases were reported in February, all of them students returning from Wuhan, China to Kerala, India.
Age-wise distribution of confirmed cases till 19th June 2020. Calculating the average mortality rate and recovery rate:
The figure below shows that polynomial regression is the best fit for our data. We also tried linear regression and support vector machine regression, which did not give accurate results (a toy comparison is sketched below).
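A minimal sketch of that comparison on toy data, contrasting a linear fit with a higher-degree polynomial (the SVR variant is omitted for brevity); the days/cases arrays are hypothetical stand-ins for the real cumulative counts.

```python
# Sketch: compare linear vs polynomial regression on a toy cumulative
# case series. The data below are synthetic, for illustration only.
import numpy as np

days = np.arange(1, 61)                    # day index since 16 March
cases = 50.0 * days**2 + 300.0 * days      # toy super-linear growth
cases += np.random.default_rng(0).normal(0, 2000, days.size)

for degree in (1, 4):                      # linear vs polynomial fit
    coeffs = np.polyfit(days, cases, degree)
    pred = np.polyval(coeffs, days)
    rmse = np.sqrt(np.mean((pred - cases) ** 2))
    print(f"degree {degree}: RMSE = {rmse:,.0f}")
```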
Forecasting using a sigmoid model: we use a sigmoid function to forecast the near-future scenario in India; even China's data resembles a sigmoidal shape. It was only from mid-March that India witnessed a continuous upsurge of cases, so this forecasting model considers data from 16th March 2020 onward.
Fitting the sigmoid function to the data:
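A sketch of such a fit, assuming SciPy's curve_fit as the optimizer (the paper does not name the exact routine); parameter values and arrays are illustrative, not the study's.

```python
# Sketch: fit a logistic (sigmoid) curve to toy cumulative counts.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, plateau, k, t0):
    """Logistic curve: plateau = saturation level, k = growth rate,
    t0 = midpoint (inflection day)."""
    return plateau / (1 + np.exp(-k * (t - t0)))

days = np.arange(1, 91)                       # days since 16 March
cases = sigmoid(days, 500_000, 0.08, 70)      # toy epidemic curve
cases += np.random.default_rng(1).normal(0, 5000, days.size)

popt, _ = curve_fit(sigmoid, days, cases,
                    p0=[cases.max() * 2, 0.1, days.mean()],
                    maxfev=10_000)
plateau, k, t0 = popt
print(f"plateau ~ {plateau:,.0f} cases, midpoint ~ day {t0:.0f}")
# Growth slows a few 1/k time units after t0 -- the "flattening" day.
```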
Python calculation and overview of the data:
The model predicts a maximum of 258,846 active cases.
The curve flattens by day 154, i.e., 25th September; after that, the curve goes down and the number of active cases eventually decreases.
There is a lot of ongoing research on vaccines, economic impacts, precautions, and reduction of COVID-19 cases. However, we are currently in a mid-COVID situation: India, along with many other countries, is still witnessing an upsurge in the number of cases at alarming rates on a daily basis. We have not yet reached the peak, so curve flattening and downward growth are also yet to happen. Each day brings fresh information and large amounts of data, and there are many other predictive models using machine learning that are beyond the scope of this paper. At the end of the day, it is only the precautionary measures we take as responsible citizens that will help flatten the curve. We can all join hands and follow all rules and regulations strictly: maintaining social distancing and taking the lockdown seriously are the only keys. This study is based on real-time data and will be useful for key stakeholders such as government officials and healthcare workers in preparing a combat plan with stringent measures. The study will also help mathematicians and statisticians predict outbreak numbers more accurately.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19), which has been declared a pandemic by the World Health Organization (WHO) [1, 2]. Several reports have shown that, as in other viral pneumonias, the incidence of venous thromboembolism (VTE) in COVID-19 patients, particularly those in intensive care, is high [3-6]. The cause of hypercoagulation in COVID-19 patients is not fully understood. Several studies have indicated an increase in some hematologic parameters that may lead to endothelial damage, immobilization-related stasis, and hypercoagulability [7-10].
Eosinophils normally make up only a small fraction of circulating leukocytes (1-3%), but their levels can vary in different disease states [11]. Eosinophils are clinically important because they are potent pro-inflammatory cells containing cytotoxic proteins and various enzymes (peroxidases, cationic proteins, and neurotoxins) that can influence the effectiveness of heparin [12, 13].
The International Society on Thrombosis and Hemostasis (ISTH), the American Society of Hematology (ASH), and the American College of Cardiology (ACC) recommend heparin prophylaxis in patients with COVID-19; however, the appropriate dose has not been clarified. Previous studies have shown that heparin prophylaxis reduces thromboembolic events in COVID-19 patients [14]. However, the efficacy of heparin prophylaxis in COVID-19 patients, as determined by laboratory data, and the factors affecting this efficacy are not known. This study aimed to evaluate the factors influencing anti-factor Xa activity in COVID-19 patients receiving low molecular weight heparin (LMWH), using laboratory data.
After receiving approval from the Ministry of Health and the local ethics committee, we included 80 patients in our clinic who tested positive for COVID-19 by polymerase chain reaction (PCR) between May 15, 2020, and June 15, 2020; their written consent was obtained. The patients were followed up clinically after transfer to the ward reserved for COVID-19-positive patients.
Inclusion criteria were as follows: patients older than 18 years who were diagnosed with COVID-19, were administered LMWH, and agreed to participate in the study.
Exclusion criteria were as follows: patients with previous coagulopathy, an ongoing indication for anticoagulant therapy (atrial fibrillation (AF), valve disease), a glomerular filtration rate (GFR) < 30 mL/min or undergoing dialysis, or known liver dysfunction.
COVID-19 was diagnosed according to the WHO interim guidelines and confirmed in our laboratory by SARS-CoV-2 RNA detection with reverse transcriptase polymerase chain reaction (RT-PCR) using nasal and pharyngeal swab samples [15] .
Patients with a systolic blood pressure (SBP) ≥ 140 mmHg and/or a diastolic blood pressure (DBP) ≥ 90 mmHg and those using antihypertensive drugs were considered hypertensive. Patients using oral antidiabetics or insulin, or exhibiting fasting blood glucose ≥ 126 mg/dL in two measurements, were considered diabetic. Body mass index (BMI) was calculated as BMI = body weight (kg) / height (m)². GFR was calculated using the Cockcroft-Gault formula: GFR = [(140 − age) × body weight (kg)] / [72 × serum creatinine (mg/dL)] (× 0.85 for women) [16].
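As a sketch, the two formulas above transcribe directly into code; this is for illustration only, not clinical software.

```python
# Direct transcription of the BMI and Cockcroft-Gault formulas above.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def cockcroft_gault_gfr(age_years: float, weight_kg: float,
                        creatinine_mg_dl: float,
                        female: bool = False) -> float:
    """Cockcroft-Gault creatinine clearance (mL/min); x0.85 for women."""
    gfr = (140 - age_years) * weight_kg / (72 * creatinine_mg_dl)
    return gfr * 0.85 if female else gfr

print(round(bmi(80, 1.75), 1))                     # 26.1
print(round(cockcroft_gault_gfr(60, 80, 1.0), 1))  # 88.9
```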
Demographic characteristics of the hospitalized COVID-19 patients were recorded, and computed tomography (CT) of the thorax was evaluated. Blood samples were collected to evaluate the patients' hematological, inflammatory, and biochemical parameters (Fig. 1). Electrocardiograms (ECG) were recorded, and O2 saturation was determined.
Treatment of patients with LMWH (enoxaparin) was arranged based on the results of laboratory tests, thorax CT, and clinical evaluation. An LMWH dose of 0.5 mg/kg (twice daily) was administered to patients with increased inflammation parameters (C-reactive protein (CRP) > 5 mg/L) and D-dimer levels (> 0.5 μg/mL), as well as pneumonic infiltration in the thorax. An LMWH dose of 40 mg (once daily) was administered to the other patients. Other treatments were determined based on the recommendations of infectious disease specialists.
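Read as a decision rule, with the three criteria treated as jointly required (our reading of the text), the dosing logic sketches as follows; this is illustrative only, not medical guidance.

```python
# Sketch of the study's dosing rule; thresholds come from the text,
# the joint-AND reading of the criteria is our assumption.
def enoxaparin_regimen(crp_mg_l: float, d_dimer_ug_ml: float,
                       pneumonic_infiltration: bool,
                       weight_kg: float) -> str:
    """Return the prophylactic enoxaparin regimen used in the study."""
    if crp_mg_l > 5 and d_dimer_ug_ml > 0.5 and pneumonic_infiltration:
        return f"{0.5 * weight_kg:.0f} mg twice daily (0.5 mg/kg)"
    return "40 mg once daily"

print(enoxaparin_regimen(12.0, 0.8, True, 70))    # 35 mg twice daily
print(enoxaparin_regimen(3.0, 0.2, False, 70))    # 40 mg once daily
```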
We determined the activity of anti-factor Xa in the blood collected from COVID-19 patients 4 h after the 3rd LMWH dose. An anti-factor Xa activity of < 0.2 IU/mL was defined as subprophylactic [17, 18] . According to previous studies, the threshold of anti-factor Xa activity for thromboembolic prophylaxis was 0.2 IU/mL [17, 18] .
Patients with decreased O2 saturation and progressing disease were transferred to the intensive care unit. Control anti-factor Xa activity was measured in blood collected 4 h after the last LMWH dose administered before discharge (Fig. 1). The hematology analyzer used sheath flow impedance, laser scatter, and SF Cube analysis technology; SF Cube is a three-dimensional analysis that uses laser light scatter at two angles and fluorescent signals for cell differentiation and counting. In addition, cell counts were confirmed by peripheral smears of blood samples taken from patients. Biochemical parameters were examined with a Cobas C702 device (Roche Diagnostics, Mannheim, Germany). CRP was examined with the BN II nephelometer system (Siemens Healthcare Diagnostics Inc., USA). D-dimer (reported in D-dimer units; normal range: 0-0.5 μg/mL) and fibrinogen levels were examined on the Sysmex CS-5100 device. Anti-factor Xa activity was measured from the obtained plasma samples using the Berichrom Heparin kit, a chromogenic assay (Siemens Healthineers, Marburg, Germany), on a Sysmex CS-5100 device in the biochemistry laboratory; the kit contains AT III reagent and was calibrated with the Berichrom LMWH calibrator. INR (international normalized ratio), PT (prothrombin time), and aPTT (activated partial thromboplastin time) were measured as coagulation parameters. Venous blood samples in coagulation tubes were centrifuged at 5000 rpm for 10 min, and INR, PT, and aPTT were measured in the biochemistry laboratory using a Sysmex CS-5100 device with Dade Actin FS activated PTT reagent and Thromborel S reagent.
Patients whose general condition was stable, whose complaints had decreased, and whose inflammatory parameters had declined were discharged. Patients with D-dimer values above 0.5 μg/mL at discharge were administered a single daily dose of LMWH (40 mg) for 30 days. Patients with lung involvement during hospitalization were given moxifloxacin (400 mg, once daily) or amoxicillin (1000 mg, twice daily) for 1 week at discharge. After discharge, these patients were monitored at home by filiation teams (the teams monitoring COVID-19 patients at home) for 14 days.
The data were analyzed using the SPSS 23.0 statistics package (SPSS Inc., Chicago, IL, USA). Continuous variables have been reported as mean ± standard deviation, and categorical variables have been reported as percentages. In comparing the averages between groups, Student's t test was used for variables with a normal distribution, and the Mann-Whitney U test was used for variables without a normal distribution. Categorical variables were compared with the chi-squared test or Fisher's exact test. The sensitivity and specificity of eosinophil to predict subprophylactic levels of anti-factor Xa activity were analyzed by receiver operating characteristic (ROC) analysis. P values < 0.05 were considered significant.
A total of 13 patients with anti-factor Xa activity < 0.2 IU/mL (subprophylactic anticoagulation) were defined as group 1, and 67 patients with anti-factor Xa activity > 0.2 IU/mL (prophylactic anticoagulation) were defined as group 2. When the baseline demographic and laboratory characteristics of the two groups were evaluated, no significant differences were found except for eosinophil counts and anti-factor Xa activity (Table 1). Laboratory analysis of blood collected before discharge revealed that eosinophil counts in group 1 were higher than in group 2, whereas aPTT and anti-factor Xa activity were lower in group 1 than in group 2 (Table 2).
The D-dimer values of 64 patients were < 1 μg/mL, whereas those of 16 patients were > 1 μg/mL. Patients with D-dimer values < 1 μg/mL and those with values > 1 μg/mL were found to have similar anti-factor Xa activity (baseline: < 1 μg/mL, 0.39 ± 0.23 vs > 1 μg/mL, 0.40 ± 0.22, p = 0.87; before discharge: < 1 μg/mL, 0.45 ± 0.25 vs > 1 μg/mL, 0.62 ± 0.30, p = 0.07). However, all 14 patients with eosinophil percentages > 4% were in the group with D-dimer levels < 1 μg/mL (p = 0.03). Eosinophil counts and percentages were also significantly higher in the group with D-dimer levels < 1 μg/mL (82.97 ± 105.88 vs 15.65 ± 15.35 and 1.47 ± 1.84 vs 0.30 ± 0.31, respectively; p = 0.01).
The AUC in the ROC analysis for baseline eosinophil counts to predict subprophylactic anti-factor Xa activity was 0.79 (range: 0.64-0.93; p = 0.001) (Fig. 2).
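For illustration, this is how such an AUC is computed with scikit-learn (the study itself used SPSS); the group sizes match the study (13 vs 67), but the eosinophil values below are simulated.

```python
# Sketch: ROC/AUC for eosinophil counts predicting subprophylactic
# anti-factor Xa activity. Outcome sizes mirror the study; the
# eosinophil distributions are invented for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# 1 = subprophylactic (< 0.2 IU/mL), 0 = prophylactic
y = np.r_[np.ones(13), np.zeros(67)].astype(int)
# Toy counts: higher on average in the subprophylactic group
eos = np.r_[rng.normal(300, 120, 13), rng.normal(150, 100, 67)]

auc = roc_auc_score(y, eos)
fpr, tpr, thresholds = roc_curve(y, eos)
best = np.argmax(tpr - fpr)               # Youden's J optimal cutoff
print(f"AUC = {auc:.2f}, best cutoff ~ {thresholds[best]:.0f} cells/uL")
```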
Thoracic CTs of the patients were evaluated, identifying 51 patients with signs of infection and 29 without signs of infection on thorax CT. When patients with and without thoracic CT findings were compared, age, gender, medication, eosinophil percentage > 4%, sedimentation rate, CRP, fibrinogen, ferritin, AST, ALT, LDH, albumin, HDL, and calcium values were found to differ significantly between the groups (Table 3). Patients with positive CT findings were mainly older males. Acute-phase reactants were higher in CT-positive patients; however, D-dimer levels and anti-factor Xa activity were similar in both groups (Table 3). Cavernous sinus thrombosis was observed in one of our patients: this patient's baseline anti-factor Xa activity was 0.19 IU/mL, the eosinophil count was 367, and the eosinophil percentage was 6.8%. The cavernous sinus thrombosis was treated with warfarin. There were no embolic complications in our other patients.
During follow-up, 1 patient died and 2 patients needed intensive care unit follow-up. The average hospitalization period of the patients was 7.55 ± 3.95 days. There was no complication in the patients followed by the filiation teams for 14 days at home, and the general condition of the patients did not deteriorate.
Anti-factor Xa activity was higher in COVID-19-negative patients than in group 1 (COVID-19-positive) patients, whereas eosinophil levels were similar in the two groups: anti-factor Xa activity (mean ± SD), COVID-19-negative 0.78 ± 0.53 vs group 1 0.18 ± 0.06, p = 0.001; eosinophil percentage (mean ± SD), COVID-19-negative 2.40 ± 1.28% vs group 1 2.96 ± 2.55%, p = 0.54; eosinophil count (mean ± SD), COVID-19-negative 217.77 ± 151.14 vs group 1 168.42 ± 147.25, p = 0.45.
In this study, we found that an increased eosinophil count is associated with subprophylactic anticoagulation. Eosinophil counts, evaluated for adjusting the anticoagulation dose, were also found to be increased in patients with low D-dimer levels. Patients with lung involvement were found to have increased inflammatory parameters and eosinophil percentages.
COVID-19 infection has been shown to be associated with increased coagulopathy [14, 19, 20]. In these patients, D-dimer and fibrinogen levels are increased, but the aPTT is decreased [21]. Local thrombotic events and thromboembolic complications may develop due to endothelial damage and the hypercoagulable state caused by COVID-19. Anticoagulant therapy reduces mortality and morbidity in COVID-19 patients [19, 20]. Various suggestions have been made regarding anticoagulant treatment strategies, informed by laboratory parameters (e.g., D-dimer) and patients' clinical condition [22]. Previous studies examined anti-factor Xa activity after LMWH administration for VTE prophylaxis, and values below 0.2 IU/mL have been shown to represent subprophylactic doses [17, 18]. However, the efficacy and dose of LMWH administered in COVID-19 patients are not clear. Levels below the anti-factor Xa values determined in previous studies may increase the risk of VTE in COVID-19 patients, who are prone to hypercoagulability. Considering this, subprophylactic anticoagulation was defined in our study by the cutoff value of 0.2 IU/mL, and was observed in 16.25% of the studied patients.
In the patient group with subprophylactic anticoagulation, eosinophil levels were found to be increased. Other demographic and laboratory parameters of patients with prophylactic and subprophylactic levels of anticoagulation were similar.
Eosinophils have pro-inflammatory, pleiotropic, and immune-regulatory properties. Eosinophils are mainly found in blood, although they are also found in the gastrointestinal tract and lungs. Lung pathology caused by eosinophils has been observed in RSV and SARS-CoV-1 viral infections [23], and eosinophils may also contribute to the lung pathology in COVID-19 patients. In hypereosinophilic cases, the degranulation of major basic protein and eosinophil peroxidase from eosinophils causes platelet aggregation and thrombus formation [24]. Eosinophils can cause in situ thrombus formation in the lungs and veins. Patients with thoracic CT lesions had high eosinophil counts and inflammatory parameters. Eosinophils secrete their own chemoattractant molecules (eotaxin and platelet-activating factor) that recruit more eosinophils into the inflammatory area, increasing inflammation and lung damage.
Enzymes released from eosinophils (peroxidases, cationic proteins, and neurotoxins) may decrease the anticoagulant activity of heparin [25, 26] . In our study, it was found that patients with high eosinophil levels had lower anticoagulant activity. Although D-dimer and fibrinogen levels were similar, patients with low anticoagulant activity only had high eosinophil levels, indicating that subprophylactic anticoagulation levels are related to eosinophils. Eosinophil counts had a good AUC (0.79) in predicting the presence of subprophylactic anticoagulation.
Our patient population was small, and many of the patients were followed up for a short period of time (average: 7.5 days). Only 3 patients needed intensive care and one patient died; therefore, the clinical outcomes of subprophylactic anticoagulation could not be evaluated. Large-scale studies involving intensive care patients may clarify the clinical consequences of the subprophylactic anticoagulation associated with elevated eosinophil counts.
We compared the patients with subprophylactic anticoagulation with ten patients without a COVID-19 diagnosis. In this comparison, the eosinophil percentages and counts were similar, but anti-factor Xa activity was significantly lower in the patients with COVID-19. These results suggest that eosinophils have a greater effect on the anti-factor Xa activity of LMWH in COVID-19 patients. However, the two patient groups were not matched for baseline characteristics, and the dose of LMWH used is not standardized, so this result should be confirmed in large clinical trials. Patients with thoracic CT lesions and more advanced disease, who were mostly older males, exhibited higher inflammatory parameters, as shown in previous studies [27].
The small number of patients is the main limitation of our study, although its results can guide the optimization of anticoagulant therapy to decrease mortality and morbidity in COVID-19 patients and serve as a basis for future large-scale studies. Our study did not include patients with morbid obesity or renal failure; further studies in these patient groups are needed. Moreover, most of our patients were followed in the inpatient clinic, so future studies analyzing patients in intensive care units are required.
Increased eosinophil counts in COVID-19 patients were found to be associated with a reduced anticoagulant effect of LMWH. Hence, this study can guide large clinical trials on whether eosinophil levels should be taken into consideration when determining the prophylactic anticoagulation strategy in patients with COVID-19.
A total of 361 QX sequences were included in the final dataset; of those, 135 belonged to "Company A" and 226 to "Company B". The sampled farm locations are reported in Fig. 1. Overall, farms were mainly located in the "Pianura Padana" region (the central area of Northern Italy) and, to a lesser extent, in North-Eastern, North-Western, Central, and Southern Italy. Although the two companies tend to operate in different Italian regions, a clear overlap was present in the densely populated poultry area of Northern Italy (Fig. 1).
QX population genetics parameter estimation. All considered field sequences formed a monophyletic group including only Italian strains (Supplementary Fig. 1). TempEst investigation revealed a positive correlation between genetic divergence and sampling time (R = 0.335), adequate for phylogenetic molecular clock analysis 20.
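For intuition, a TempEst-style root-to-tip regression can be sketched as a simple linear regression of genetic divergence against sampling date; the values below are simulated, not the study's data.

```python
# Sketch: root-to-tip regression (the analysis TempEst performs).
# Sampling years and divergences are synthetic, for illustration only.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
sampling_year = rng.uniform(2012, 2016, 50)
# Toy divergence: clock-like accumulation from a 2003-ish root + noise
root_to_tip = 0.002 * (sampling_year - 2003.5) \
              + rng.normal(0, 0.008, 50)

fit = linregress(sampling_year, root_to_tip)
print(f"R = {fit.rvalue:.3f}")                       # temporal signal
print(f"rate ~ {fit.slope:.4f} subst/site/year")     # slope = clock rate
print(f"root age (x-intercept) ~ {-fit.intercept / fit.slope:.1f}")
```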
The tMRCA of the overall QX population in Italy (i.e. the QX genotype introduction) was estimated at 2003.52 [95HPD: 1999.73-2006.76] using the structured coalescent approach. Almost identical results were obtained when including a third "ghost" deme (i.e. an estimated deme for which no sequences were available, representative of other unsampled companies and farms) in the analysis or when using the "traditional" coalescent approach. When strains collected from the integrated poultry companies were considered independently, the tMRCA was estimated at 2003.19 [95HPD: 1994.11-2010] for Company A and at 2010.6 [95HPD: 2007.26-2011.99] for Company B. The viral population dynamics evidenced a substantially constant Ne*t (effective population size × generation time, or relative genetic diversity), with the remarkable exception of the period between mid-2014 and mid-2015, when a sudden fluctuation was observed. However, a quite different scenario emerged when comparing the two integrated poultry companies. Company A featured a substantially constant population size, with a minor decrease particularly affecting the period 2013-2015, although the Ne*t 95HPD intervals were relatively broad, at odds with the significance of the observed variations. On the contrary, a much more changeable pattern was observed in Company B (Fig. 2).
Migration among companies. The structured coalescent model fitted with the two companies evidenced the presence of 2 separate clades (Fig. 3A) for the 2 companies, with only 11 exceptions, represented by strains sampled in Company B but clustering in the Company A clade, thus suggesting the migration of strains from Company A to Company B. Accordingly, the migration rate from Company B to Company A was 2.5 × 10⁻² [95HPD: 6.02 × 10⁻²-8.40 × 10⁻²], while that from Company A to Company B was 6.66 × 10⁻² [95HPD: 2.2 × 10⁻²-0.12].

Figure 1. Location of farms from which samples have been obtained. Different companies have been color coded. Samples collected in Company B but clustering within the Company A clade have been colored in red (herein named "Imported"). Farm locations have been jittered using an internal routine of the ggplot library to guarantee anonymity. The map was generated in R (version 3.4.4), using the library ggmap 51.

Phylogeographic analysis. All the samples phylogenetically belonging to Company A but collected from Company B originated from farms located in the densely populated poultry area of Northern Italy (Fig. 1). The continuous phylogeographic analysis reconstructed a spreading pattern originating from a single introduction in the Emilia Romagna region (Company A), followed by a progressive expansion and persistence at high levels in the Pianura Padana region. More rarely, spreading episodes toward other Italian regions were observed (Fig. 4). After QX introduction, the infection wave front increased slowly until approximately 2009, when a rapid expansion led to the final distribution range by the middle of 2012 (Fig. 5). Accordingly, the dispersal velocity progressively increased in the first years after QX genotype introduction, peaking in the period 2009-2011 and then remaining essentially constant, despite some fluctuations (Fig. 5). The presence of a high dispersal velocity after 2012, when no further increase in wave front was observed, suggests that IBV continued to circulate at a high rate after its first establishment in a region.
The analysis of the effect of different environmental factors on QX genotype dispersal velocity led essentially to negative results (i.e., no significant correlation). The only exception was the poultry density SL model, which was positively and significantly correlated with viral dispersal velocity: D = 0.0225, percentage of D with p-value < 0.005 = 74%.
Despite its economic relevance, the epidemiology of IBV and the factors affecting its behavior have been only partially investigated. Even if a huge amount of knowledge and literature has accumulated over time, most reports are anecdotal or based on the analysis of single clinical outbreaks 17, 21, 22. Although relevant pieces of information can be obtained this way, the risk of being biased by personal beliefs or the particular conditions under investigation is high. A certain caution is thus required when inferring and extending the same conclusions on a broader/general scale. Moreover, most of the available studies focus on avian influenza and, to a lesser extent, Newcastle disease and infectious laryngotracheitis 17, 23, 24.
The aim of the present study was to construct an objective and statistically sound framework to understand the behavior of IBV field strains, the effect of control measures, and the factors conditioning their epidemiology. The field of phylodynamics, with all its related extensions, provides an invaluable tool for the study of viruses, particularly rapidly evolving ones, whose evolution can be measured in "real time" over the course of an epidemic 25.
The IBV QX genotype is the most relevant field strain in Italy 26, 27, and despite a relatively long circulation and the efforts devoted to its control, it remains one of the main threats to poultry industry profitability. Therefore, understanding the forces shaping its epidemiology would be of remarkable relevance in order to prevent the induced damages rather than merely try to contain them. Remarkably, the Italian IBV strains appear to originate from a single introduction event, as previously reported 27. It was therefore possible to reconstruct the evolution and epidemiological pattern of the Italian strains without the biasing effect of strains recently introduced from other countries.
The implemented approach allowed the reconstruction of the migration history of the QX genotype over time. The estimated introduction, in the Emilia Romagna region, shortly predates the first detection, speaking in favor of the effectiveness of the Italian monitoring and early-detection systems. All the analyses, independently of the underlying statistical model, support Company A as the first introduction site (Fig. 3). Thereafter, viral circulation was limited to farms belonging to this company for years, until approximately 2010, when Company B became involved. Contextually, a progressive increase in diffusion speed was noticed (Fig. 5), not unexpectedly considering the rising number of involved farms (especially at the border between the Veneto and Lombardy regions, where most farms are located) and thus the increased spreading potential and opportunity. The high farm density of this area has been described as a risk factor for different infectious diseases 22, and IBV seems to be no exception. Interestingly, the viral population size remained relatively constant in this time period, evidencing that, even if QX strains were able to spread effectively from farm to farm, their replication was adequately controlled, likely by effective vaccination strategies. Actually, a certain slowdown in dispersal velocity was noticed in 2011-12, potentially because of a progressive decrease in the availability of naive populations. A dramatic change was observed in 2014, when a new spreading wave (Fig. 4) and an increase in diffusion rate (Fig. 5) and population size (Fig. 2) were detected. A more detailed analysis demonstrated that this variation affected Company B only (Fig. 2). A previous study ascribed this episode to a change in the vaccination scheme adopted by this company, which moved from a heterologous Mass+793B based vaccination to a Mass-only vaccination, leading to increased viral circulation and a higher number of clinical outbreaks 4. Moreover, experimental studies demonstrated a significant reduction in R0 in vaccinated groups compared to unvaccinated ones 28. It can therefore be speculated that the increased within-farm infectious pressure and the higher flock susceptibility to infection could have enhanced the risk of IBV spreading to other farms and regions. In support of this hypothesis, the geographical spread mainly affected Northern Italian farms (where Company B is located). Moreover, when a new double vaccination was implemented, the decrease in viral population size was mirrored by a reduction in dispersal velocity. Continuous phylogeography showed that the areas interested by more intensive viral circulation were those featuring a higher poultry density, and this evidence was confirmed by a statistically significant correlation between poultry density and dispersal velocity. The association between spatial proximity and farm infection is probably the most consistently reported risk factor for poultry infectious diseases 17, 23, 29. Although airborne transmission has been proposed for IBV, its occurrence has rarely been demonstrated experimentally 30. However, spatial proximity likely increases the likelihood of a greater number of horizontal contacts between farms, including the movement of people, vehicles, and fomites, as well as the sharing of similar risk factors (e.g. environmental conditions, climate, presence of wild animals, etc.) 16, 23.
Based on these premises, the presence of segregated poultry companies should represent an effective obstacle to viral spread, and the obtained results partially confirm this expectation. The strains from the two poultry companies formed two independent clusters, which suggests the effectiveness of independent production flows/chains in protecting farms from exogenous introductions. Additionally, the application of adequate biosecurity measures, enforced also by Italian legislation, likely contributed to limiting new strain introductions.
The exceptions to this general rule were farms located in the densely populated poultry area of Northern Italy, where the two companies overlap. The unidirectionality of the viral flux from Company A to Company B implies that other factors, besides spatial proximity, must be in place. A detailed survey could shed light on relevant factors such as differing biosecurity measures, structural factors, vaccination strategy, etc. The mediation of other "actors" cannot be excluded either. In fact, the analysis of just two companies, however predominant they are in the Italian poultry sector, cannot be considered an accurate depiction of the Italian situation. Remarkably, the inclusion of a third deme (representative of other unsampled companies and farms) in the analysis model highlighted that several transmission events could be mediated by smaller entities operating in the same region. Actually, the high migration rate estimated between Company B and this ghost deme speaks in favor of its pivotal role in maintaining active IBV circulation.
Even if the idea of modeling demes for which no sequences are available could seem counterintuitive, previous studies showed that the structured coalescent can provide meaningful estimates even in the absence of samples from one population 31, and this approach has already been applied and proven effective for other diseases, including Ebola 32. Since Company A was evaluated in the same analysis run, the absence of relevant links between this company and the ghost deme further supports the reliability of the analysis, favoring an actual interaction between Company B and the ghost deme rather than mere low specificity of the method. A less effective control of IBV infection can be speculated for small companies, whose management capability and resources are limited compared to big integrated companies. All Italian farms have to follow the national legislation 33 dictating the minimum biosecurity measures to be applied; however, integrated poultry farms, part of major companies, enforce additional managerial practices to increase biosecurity levels. Training of personnel and veterinarians, internal audits, and periodic controls guarantee a higher level of application of the required standards compared to most small non-integrated farms.
The greater spatial overlap and the likely sharing of some infrastructure (e.g. roads, accessory personnel, and services) could nevertheless have a negative indirect effect on the major companies, especially in Northern Italy where Company B is located. However, differences between Company A and Company B in the application of biosecurity measures and production flow management could also explain the different IBV epidemiology, as demonstrated by the dissimilar patterns of viral population fluctuation in the two companies (Fig. 2). A further risk factor that deserves investigation is the presence of the rural sector, which is highly concentrated in the densely populated poultry area of Northern Italy. This sector is characterized by a complex mix of growers, dealers, and backyard flocks, often applying poor biosecurity measures and linked together by a poorly traceable contact network 34. Although interaction with industrial poultry farming is strongly discouraged, illegal/indirect interactions have been documented, and multiple epidemiological connections could result in bidirectional transmission between the two sectors, as demonstrated in the Italian low-pathogenicity avian influenza (AI) outbreaks that occurred in 2007-2009 34. After these episodes, stricter legislation was developed, imposing limits on animal movements and more active surveillance in the rural sector. Nevertheless, no measures were taken for the monitoring and control of IBV in these enterprises, and therefore their role as sources of encroachment into intensive farming cannot be excluded.
Other environmental factors do not seem to play a relevant role in viral dispersal. While climatic conditions like temperature, humidity, and wind could actually affect viral viability and spread, their effect could be circumvented by transmission mediated by "fast-moving" vectors like trucks, personnel and, potentially, wild species 35, 36. More surprising could be the non-significant role of road density. However, it must be stressed that the available raster reported the overall density of roads, which could differ significantly from those preferentially used for the transportation of live animals or their byproducts, hindering the detection of an otherwise plausible risk factor. Therefore, mapping live-animal transportation pathways could provide remarkable benefits for understanding and controlling the epidemiology of IBV (and other infectious diseases).
The present study demonstrates that IBV spreading potential is mainly affected by overall farm and poultry density, which can reasonably be claimed as a major risk factor. Other environmental/climatic variables do not seem to affect IBV epidemiology, stressing the pivotal role of human action and thus highlighting the direct benefits that could derive from improved management and organization of the poultry sector on a larger scale. Actually, the integration of poultry production seems to provide a relevant constraint on IBV circulation, even though some differences were noted between the two considered companies. In fact, while differences in management and applied control strategies likely play a role, the presence in the same area of other minor poultry companies seems to represent a major issue, probably due to the less effective infection control ascribable to the sometimes lower organizational capability and resources of small enterprises. These results emphasize the need for active sharing of sequences and related molecular epidemiology data originating from all the actors in poultry production, allowing a proper depiction of viral exchange dynamics based on actual data rather than estimations. The obtained information would represent a fundamental substrate for the implementation of effective and shared efforts for infection control on a broad regional scale.
IBV strain sampling, diagnosis and sequencing. Samples were collected for routine diagnostic purposes in the period 2012-2016 from poultry flocks belonging to the two main poultry companies (here named Company A and Company B) operating in Italy, which together account for about 90% of Italian poultry production. Samples were obtained mainly from outbreaks of respiratory disease, following a standard protocol that enforced the collection of a pool of 10 tracheal swabs from randomly selected birds. For each sampling, the collection date and farm location were recorded. All sampling was performed in the context of routine diagnostic activity, and no experimental treatments or additional assays were implemented during the study; therefore, no ethical approval was required to use specimens collected for diagnostic purposes. Additionally, several samples from Company A had already been sequenced using the same protocol and published in Franzo et al. 27; when detailed information on the sampling farm and time could be traced back, these samples were included in the study. Permission to use the collected samples for research purposes was obtained from each company.
Swab pools were resuspended in 2 ml of PBS and vortexed. Thereafter, RNA was extracted from 200 µl of the obtained eluate using the High Pure Viral RNA Kit (Roche Diagnostics, Monza, Italy). Diagnosis was performed by amplification and Sanger sequencing of the hypervariable region of the S1 gene using the primer pair described by Cavanagh et al. 37. The quality of the obtained chromatograms was evaluated using FinchTV (http://www.geospiza.com), and consensus sequences were generated using ChromasPro (version 1.5).
Sequence dataset preparation. All obtained sequences, plus the reference dataset provided by Valastro et al. (2016), were aligned using MAFFT 38, and a phylogenetic tree was reconstructed using IQ-TREE 39, selecting as the best substitution model the one with the lowest Akaike information criterion, calculated using jModelTest 40. The strains clustering with the GI-19 lineage (previously known as the QX genotype) were selected and further evaluated for recombination in the considered region using RDP4 41 and GARD 42. To limit the computational burden, the sequences were clustered at a 99% identity threshold using CD-HIT 43, and a single representative sequence for each cluster was selected; these sequences, plus the Valastro et al. (2016) references, were re-aligned, and recombination analysis was performed. Recombinant sequences, including those belonging to the same cluster, were removed from the dataset. Finally, the dataset was re-expanded to its original size, and sequences identical or closely related (p-distance < 0.01) to the QX-based vaccines administered in Italy were also excluded. To evaluate the distribution of Italian GI-19 strains in the international scenario, an extensive dataset of S1 IBV sequences was downloaded from GenBank and a phylogenetic tree was reconstructed as described above. To reduce computational complexity and increase ease of interpretation (without losing information), only one sequence representative of all identical ones was selected using CD-HIT and included in the analysis.
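As a simplified stand-in for the de-duplication step: CD-HIT clusters at a 99% identity threshold, whereas the sketch below only collapses exactly identical sequences, keeping one representative each. File names are hypothetical.

```python
# Simplified de-duplication sketch (exact-identity only; CD-HIT's
# 99%-identity clustering is more permissive). Biopython required.
from Bio import SeqIO

representatives = {}
for record in SeqIO.parse("qx_s1_sequences.fasta", "fasta"):
    key = str(record.seq).upper()
    representatives.setdefault(key, record)   # keep first occurrence

SeqIO.write(representatives.values(), "qx_s1_dedup.fasta", "fasta")
print(f"{len(representatives)} representative sequences kept")
```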
The presence of an adequate phylogenetic signal was assessed by a likelihood mapping analysis performed with IQ-TREE. TempEst was used to preliminarily evaluate the temporal signal of the Italian QX phylogeny and therefore the applicability of molecular clock-based methods 20 .
Strain migration among integrated poultry companies. IBV QX strain migration among companies was evaluated using the structured coalescent-based approach implemented in the MultiTypeTree extension of BEAST2 44. In this model, the considered population is divided into a series of demes, which can be imagined as different islands, each featuring its own population size and interconnected by certain migration rates.
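The structured coalescent itself is fitted in BEAST2, but the island-model intuition can be illustrated with a toy simulation of a single lineage hopping between two demes as a continuous-time Markov chain; the rates below merely echo the order of magnitude of the migration estimates reported earlier and are otherwise arbitrary.

```python
# Toy island-model illustration: one lineage migrates between demes
# "A" and "B" with asymmetric exponential waiting times. Rates are
# illustrative only, not the fitted BEAST2 parameters.
import numpy as np

rng = np.random.default_rng(4)
rates = {"A": 0.0666, "B": 0.025}   # per-year rate of leaving each deme
other = {"A": "B", "B": "A"}

deme, t, history = "A", 0.0, []
while True:
    t += rng.exponential(1.0 / rates[deme])   # waiting time to next jump
    if t > 20.0:                              # 20-year observation window
        break
    history.append((t, deme, other[deme]))
    deme = other[deme]

for when, src, dst in history:
    print(f"t = {when:6.2f} years: lineage migrates {src} -> {dst}")
```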
In the particular Italian QX scenario, the serially sampled (i.e. with known collection date) strains were used to infer the migration rate and history between the two integrated poultry companies (i.e. considered as different demes) over time. Additionally, the Bayesian approach implemented in BEAST allowed the contextual estimation of other population parameters, including the time to the most recent common ancestor (tMRCA), the evolutionary rate and the population size.
To account for the presence of other farms and companies operating in the Italian poultry sector, which could take part in or mediate viral transmission between the two investigated major companies, a third "ghost" deme (a deme for which no sequences were available) was added to the model 31. The prior on the ghost deme size was set to one tenth of that of the other demes, according to the estimated poultry population distribution. However, a broad prior distribution (i.e. a relatively uninformative prior) was chosen to avoid constraints or biases in the posterior parameter estimation.
For all analyses, the best substitution model (TN93 + G4) was selected based on the Bayesian information criterion, calculated using jModelTest 40, while the relaxed lognormal molecular clock model was selected based on marginal likelihood calculation and comparison using the Path Sampling and Stepping Stone methods 45.
The final estimates were obtained by running a 200 million generation Markov chain Monte Carlo, sampling parameters and trees every twenty thousand generations. Results were visually inspected using Tracer 1.5 and accepted only if mixing and convergence were adequate and the effective sample size (ESS) was greater than 200 for all parameters.
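For readers unfamiliar with the ESS criterion, the sketch below shows one common way of estimating it from a single parameter trace; Tracer's exact estimator may differ, so this is only an illustrative approximation based on summed autocorrelations.

```python
# Approximate effective sample size of an MCMC trace (values > 200 required).
import numpy as np

def ess(trace, max_lag=None):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations)."""
    x = np.asarray(trace) - np.mean(trace)
    n = len(x)
    max_lag = max_lag or n // 2
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    rho_sum = 0.0
    for lag in range(1, max_lag):
        if acf[lag] < 0:      # truncate at the first negative autocorrelation
            break
        rho_sum += acf[lag]
    return n / (1 + 2 * rho_sum)

rng = np.random.default_rng(0)
trace = rng.normal(size=5_000)  # a well-mixed chain: ESS close to chain length
print(round(ess(trace)))
```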
Parameter estimates were summarized in terms of mean and 95% highest posterior density (HPD) interval after the exclusion of a burn-in equal to 20% of the run length. Maximum clade credibility (MCC) trees were constructed and annotated using TreeAnnotator (BEAST package).
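The 95% HPD interval is the narrowest interval containing 95% of the posterior samples. A minimal sketch (using a simulated trace, not the study's output) is:

```python
# Mean and 95% HPD after discarding a 20% burn-in; the chain is simulated.
import numpy as np

def hpd(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples."""
    s = np.sort(np.asarray(samples))
    k = int(np.ceil(mass * len(s)))           # samples inside the interval
    widths = s[k - 1:] - s[: len(s) - k + 1]  # widths of all candidate windows
    i = int(np.argmin(widths))                # left edge of narrowest window
    return s[i], s[i + k - 1]

rng = np.random.default_rng(1)
chain = rng.gamma(shape=5.0, scale=0.2, size=20_000)  # stand-in posterior trace
burned = chain[int(0.2 * len(chain)):]                # drop 20% burn-in
lo, hi = hpd(burned)
print(f"mean = {burned.mean():.3f}, 95% HPD = ({lo:.3f}, {hi:.3f})")
```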
The consistency of the results was also evaluated by performing a "traditional" serial coalescent analysis in BEAST 1.8.4 46. The same substitution and clock models of the structured coalescent analysis were selected, while a nonparametric skyline population model was chosen to reconstruct the viral population dynamics over time 47. Independent analyses for each integrated company were also performed using the same approach, generating two new datasets that included only the sequences collected from a specific company. However, sequences introduced from one company into the other were excluded from the company-specific analyses, since they did not share a common evolutionary history.
Continuous phylogeography and determinants of IBV spreading. The history of QX dispersal over time was reconstructed using the continuous phylogeographic approach described by Lemey et al. 48, implemented in BEAST 1.8.4. Substitution and clock models were selected as previously described. Similarly, the Gamma Relaxed Random Walk was preferred over the other continuous diffusion phylogeographic models based on marginal likelihood calculation and comparison using the Path Sampling and Stepping Stone methods 45, 48. The final estimates were obtained by running a 200 million generation Markov chain Monte Carlo, sampling parameters and trees every twenty thousand generations. Results were visually inspected using Tracer 1.5 and accepted only if mixing and convergence were adequate and the effective sample size was greater than 200 for all parameters. The reconstruction of QX movements over time within the Italian borders was obtained using SpreaD3, summarizing and visualizing the full posterior distribution of trees obtained in the continuous phylogeographic analysis 49.
Patterns and determinants of viral spreading were evaluated as described by Dellicour et al. 19, using the seraphim R library 50. The history of lineage dispersal was recovered from the posterior trees generated using BEAST and annotated with the ancestral longitude and latitude reconstruction. In particular, the distance, duration and velocity of each spatial dispersal event were extracted as vectors and used to generate different summary statistics of viral spreading, including dispersal velocity and maximal wavefront distance (measured from the location of the tree root).
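These summary statistics follow directly from treating each annotated branch as a spatio-temporal movement vector. The toy sketch below (coordinates, times and units are invented; seraphim works on projected geographic distances) computes a mean dispersal velocity and the maximal wavefront distance from the root:

```python
# Branch-wise dispersal statistics from (start, end, time) vectors; toy data.
import math

# (x_start, y_start, x_end, y_end, t_start, t_end) per branch, in km and years
branches = [
    (0.0, 0.0, 40.0, 30.0, 2012.0, 2013.0),
    (40.0, 30.0, 90.0, 30.0, 2013.0, 2014.5),
    (40.0, 30.0, 40.0, -50.0, 2013.0, 2015.0),
]
root = (0.0, 0.0)

velocities, wavefront = [], 0.0
for x0, y0, x1, y1, t0, t1 in branches:
    dist = math.hypot(x1 - x0, y1 - y0)
    velocities.append(dist / (t1 - t0))  # branch dispersal velocity
    wavefront = max(wavefront, math.hypot(x1 - root[0], y1 - root[1]))

print(f"mean velocity = {sum(velocities) / len(velocities):.1f} km/year")
print(f"maximal wavefront distance from the root = {wavefront:.1f} km")
```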
Several environmental/social variables were considered to determine whether they were associated with the dispersal rate of IBV lineages. The environmental rasters describing the considered variables are shown in Supplementary Fig. 2.
In more detail, the values in the raster (i.e. altitude, population density, poultry density, temperature, etc.) were used to assign a weight to each of the abovementioned vectors. Two models of spatial movement were considered: (1) the "straight line (SL) path" model, assuming a straight movement between the starting and ending locations of each branch (i.e. the branch weight is computed as the sum of the values of the raster cells through which the straight line passes);
(2) "least cost (LC) path" model, using a least cost algorithm (i.e. the branch weight is computed as the sum of the values of cells transition values between adjacent cells along the least-cost path). In this model, the analyzed environmental variable can be considered both as a conductance (i.e. enhancing viral dispersal through the cells with higher values) or resistance factor (i.e. allowing an easier dispersal through cells with lower values). Both instances were evaluated for each considered factor.
The obtained "environmental" weights were used to calculate a regression with the branch duration and the corresponding coefficient of determination (R 2 env ) was obtained. A null coefficient of determination (R 2 null ) was also calculated assuming the null raster (i.e. when only the spatial distance of each movement is assumed to affect branch duration). The statistic D = R 2 env -R 2 null was selected as final outcome, and describes how much the regression is strengthened when the spatial variation in the environmental variable is included. To account for the phylogenetic uncertainness, the D statistic was calculated for each tree of the posterior distribution. However, for computational constraints, the number of posterior trees was down-sampled to 1000 after discharging a 20% burn-in. Only the environmental variables with more than 90% of D statistics > 0 were considered for further analysis. Particularly, the significance of D statistic of those variables was assessed against a D null distribution obtained by randomizing 1000 times the phylogenetic nodes location under the constraint that branch length remained equal. A p-value was generated for each initial tree, therefore a percentage of the trees with p-value < 0.05 could be calculated, which can be interpreted as a posterior probability of observing a significant correlation between lineage movements and considered environmental variable. According to , a percentage of p-value < 0.05 greater than 50% was considered a strong evidence that the environmental variable is associated to viral movement speed 19 .
|
with a positive RT-PCR assay. Clinical data were retrospectively retrieved from the medical records, including clinical features, laboratory findings, imaging, treatment and outcome.
We identified 59 patients with a haematological disease and concomitant COVID-19 infection. Their mean age was 67 years (range 32-92) and 54% were male. Thirty-three patients (56%) had a lymphoid malignancy and 20 patients (34%) suffered from a myeloid malignancy. A relatively high proportion of patients (10%) had idiopathic thrombocytopenic purpura (Table 1).
Thirty-nine (66%) patients were being treated for their underlying disease at the time of COVID-19 diagnosis (Table 1). The mean duration of symptoms before the diagnosis of COVID-19 was 5.8 days (range 0-34). Eighty-eight percent of patients had a community-acquired infection and 54% had metabolic comorbidity (e.g. hypertension, diabetes, obesity or cardiovascular events). The most common presenting symptoms were fever (93%), dyspnoea (62%), dry cough (47%) and diarrhoea (29%). Almost all patients (94%) had CT imaging abnormalities characteristic of COVID-19, the most common radiologic finding being ground-glass opacifications. Four patients (7%) had neutropenia at presentation and 23 (40%) lymphopenia. The different treatments given for COVID-19 and their outcomes are shown in Table 2.
Seven patients with respiratory failure did not start mechanical ventilation due to the underlying advanced haematological disease. Five patients (8.5%) developed a thrombotic event during follow-up, mostly pulmonary embolisms.
At last follow-up, 20 patients (34%) had died due to COVID-19. The mortality rate was 45% for patients above 60 years and 11% for patients below 60 years. There was no difference in survival between lymphoid and myeloid malignancies. In addition, we did not observe any difference in survival between the different treatment strategies for COVID-19 infection.
To the best of our knowledge this is the second European series of patients with COVID-19 and a haematological disease [3]. The estimated 1-month overall survival is 71%, which is in line with the survival rate of haematological patients published by Lee et al. and that of other series of patients with a malignancy [2-4]. It must be noted that, as in other case series, the average age of our series is above 60 years and more than 50% of patients had metabolic comorbidities. In the series of Malard et al. there was an overrepresentation of patients with multiple myeloma [2]. This could not be confirmed in our multinational cohort, although lymphoid malignancies seem to be more common. In our series 92% of the patients needed to be hospitalised, so our data are biased by the fact that only patients with severe or critical illness were tested, owing to the limited availability of test capacity. This is also reflected in the presenting symptoms: in the series by Lee et al. 61% had a fever, 47% a dry cough, 39% dyspnoea and 6% diarrhoea; these symptoms were all more frequently present in our series at presentation [3].
We did not observe any benefit of the specific treatments given for COVID-19. However, to establish the role of possible interventions in this category of patients, trials with larger, more uniform cohorts or randomised designs need to be conducted.
Overall, patients with a haematological disease seem to be more vulnerable to a severe course of COVID-19 than patients without a malignancy, as already shown in the report by He et al. [1]. Pending a vaccine or treatment for COVID-19, precautions should be taken. Haematology departments should remain a COVID-19-free zone, patients and personnel should strictly comply with hygiene advice and social distancing, and patients and personnel should be tested even upon the mildest symptoms. Because of the expected long duration before normalisation of hospital care, treatment of the underlying disease should be continued when possible.
|
Difficulties exist in the detection of the etiologic agents of lower respiratory tract infections in children (especially younger children), including M. pneumoniae, with regard to the adequate sampling of respiratory materials for pathogen culture and polymerase chain reaction (PCR), and the need for paired blood sampling for serologic tests. In addition, it is known that in some patients the diagnostic antibodies are not detected in the early stage of M. pneumoniae infection [1].
Although M. pneumoniae is a small bacterium that can induce pneumonia, the immunopathogenesis of this agent in humans is poorly understood. Clinical and experimental studies support the hypothesis that lung injury in M. pneumoniae infections is associated with the cell-mediated immunity of the host [7-10], including temporary anergy to purified protein derivative (PPD) [9] and the dramatic beneficial effect of corticosteroids on severe MP in adults and children [7, 10-13]. Therefore, it is expected that the severity of pulmonary lesions in MP might differ with the age of the patients, and that laboratory findings might differ according to the severity of pneumonia.
In the present study, we used two IgM serologic tests and two examinations at admission and discharge to characterize the clinical features, laboratory findings, and chest radiographic findings in children with MP during a recent epidemic in South Korea.
We retrospectively analyzed the medical records and chest radiographic findings of 191 children with MP who were admitted to The Catholic University of Korea, Daejeon St. Mary's Hospital during a nationwide MP epidemic, from January 2006 through December 2007. A total of 1,083 patients with pneumonia or lower respiratory tract infections were admitted during this period. Among them, we selected patients with MP using two IgM serologic tests: the indirect microparticle agglutinin assay (MAA: Serodia-Myco II, Fujirebio, Japan; positive cutoff value ≥1:40) and the cold agglutinin titer (positive cutoff value ≥1:32). Following parental consent, both assays and some laboratory indices were routinely performed twice: once at the time of admission and once at discharge (mean: 6.0 ± 2.1 days apart). Subjects were selected for inclusion in the study if seroconversion was shown on both assays during admission, or if increased MAA-positive titers (≥4-fold) with corresponding cold agglutinin titers (including seroconversion) were displayed on the second test. Patients who tested positive in both assays at admission, but did not have increased or decreased titers at discharge, were regarded as having recent past infection and were excluded from the study (38 cases). Blood culture for bacterial pathogens was performed for all pneumonia patients. Nasopharyngeal aspirates or sputum for viral agents (influenza viruses type A and B, parainfluenza viruses, respiratory syncytial virus, and adenoviruses) and PCR for M. pneumoniae were examined in the majority of patients (129 patients).
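The inclusion rule can be expressed compactly. The sketch below is our paraphrase of the criteria stated above (reciprocal titers, with the cutoffs of ≥1:40 for MAA and ≥1:32 for cold agglutinins), not code used in the study:

```python
# Hypothetical classifier for the paired-serology inclusion rule.
# Titers are reciprocal values (1:40 -> 40); 0 stands for a negative test.
def include_patient(maa_adm, maa_dis, ca_adm, ca_dis):
    seroconverted = (maa_adm < 40 and maa_dis >= 40 and
                     ca_adm < 32 and ca_dis >= 32)       # both assays convert
    fourfold = (maa_adm >= 40 and maa_dis >= 4 * maa_adm and
                ca_dis >= 4 * max(ca_adm, 1))            # >=4-fold rises
    return seroconverted or fourfold

print(include_patient(0, 160, 0, 32))    # seroconverter -> True
print(include_patient(40, 640, 16, 64))  # 4-fold riser -> True
print(include_patient(40, 40, 32, 32))   # stable titers -> False (excluded)
```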
The chest radiographic patterns at admission of patients with MP were divided into two groups. Patients with increased nodular densities along the bronchial trees and/or an interstitial pattern on the unilateral or bilateral lung fields were designated as the bronchopneumonia group. Patients with distinctive subsegmental, segmental or lobar consolidation were designated as the segmental/lobar pneumonia group. The chest radiographic findings were reviewed and classified independently by two pediatricians (KY Lee and YS Youn) and one pediatric radiologist (JC Kim).
We divided the 191 children with MP into three groups according to age: ≤2 years of age (29 patients), 3-5 years of age (81 patients), and ≥6 years of age (81 patients), and into another two groups according to pneumonia pattern: the bronchopneumonia group (96 cases) and the segmental/lobar pneumonia group (95 cases). In addition, the children aged ≥6 years (81 patients) were classified into three groups based on the severity of pneumonia: the bronchopneumonia group (25 patients), the mild segmental/lobar group (33 patients), and the severe segmental/lobar group (23 patients). The mild segmental/lobar pattern was defined as having an area of consolidation in less than one lobe without pleural effusion, while the severe segmental/lobar pattern was defined as having an area of consolidation over one lobe, including multiple lobe involvement and/or any consolidation with pleural effusion.
We also evaluated the pneumonia patients according to diagnostic antibody status. We compared the clinical and laboratory characteristics among the groups. The study was approved by our Institutional Review Board.
Statistical analyses were performed using the Statistical Package for the Social Sciences for Windows version 12.0 (SPSS, Chicago, IL, USA). Continuous variables are reported as the mean ± standard deviation. Statistical significance was assessed using the χ² test for categorical variables, and the independent-sample t-test, paired t-test, and one-way analysis of variance (ANOVA) for continuous variables. A p value < 0.05 was considered statistically significant.
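Although the study used SPSS, the same tests are available in open-source tools; the following minimal sketch with invented values shows the three tests (here using scipy):

```python
# Chi-square on a 2x2 table, independent-sample t-test, one-way ANOVA;
# all numbers are illustrative, not the study's data.
from scipy import stats

# chi-square: pneumonia pattern (rows) vs. sex (columns), toy counts
chi2, p_chi, dof, _ = stats.chi2_contingency([[50, 46], [41, 54]])

# independent t-test: CRP in two pneumonia groups, toy samples
t, p_t = stats.ttest_ind([1.8, 2.2, 2.5, 1.9], [4.8, 5.5, 4.9, 5.2])

# one-way ANOVA: a laboratory index across three age groups, toy samples
f, p_f = stats.f_oneway([3.1, 2.9, 3.4], [2.6, 2.8, 2.4], [2.0, 2.2, 1.9])

print(f"chi-square p = {p_chi:.3f}, t-test p = {p_t:.4f}, ANOVA p = {p_f:.4f}")
```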
The mean age of the subjects was 5.5 ± 3.0 years (range, 9 months-14 years), and the male-to-female ratio was 1:1.1. The age distribution of the patients is shown in figure 1. The patients had symptoms and signs indicative of pneumonia at the time of admission. All patients had a fever (>38°C per axilla) and cough, and the majority of patients had abnormal breath sounds on auscultation. Of the 191 patients with MP, 86 were seroconverters (i.e., IgM-negative at admission to IgM-positive at discharge) for both assays, and 105 seropositive patients showed increased MAA titers (>4-fold) with corresponding cold agglutinin titers during hospitalization. The median titers of MAA and cold agglutinins in seroconverters at discharge were 1:160 (range, 1:40-1:2,560) and 1:32 (range, 1:16-1:512), respectively. The median titers of both assays in seropositive patients were 1:160 (range, 1:40-1:640) and 1:16 (range, 1:1-1:512) at admission, and 1:640 (range, 1:80-1:10,240) and 1:64 (range, 1:8-1:1,024) at discharge, respectively. PCR assay for M. pneumoniae was performed in 129 patients; 37 (28.7%) were positive. No patient had a positive blood culture for bacterial pathogens, including Streptococcus pneumoniae. A viral study performed in the same 129 patients revealed that 2 patients were co-infected with respiratory viruses (respiratory syncytial virus and parainfluenza virus A). Extrapulmonary manifestations of M. pneumoniae were observed, comprising 19 cases of skin rash, 9 cases of elevated hepatic enzymes (AST and ALT >2-fold the normal values), and 1 case of encephalopathy.
We observed significant differences in certain parameters among the three age groups (Table 1). The total duration of fever tended to be longer, and the frequency of segmental/lobar pneumonia was significantly higher, in patients ≥6 years of age compared with the younger groups (69.1% vs. 40.7% vs. 20.7%, respectively; p < 0.001). Fever lasting ≥7 days was also more frequent in the older groups (p = 0.04). Laboratory findings revealed that white blood cell (WBC) count, lymphocyte differential, and platelet count were lower in the group of patients aged ≥6 years, whereas C-reactive protein (CRP) values were higher in this group (Table 1). Compared with admission, at discharge the MP patients had a significantly increased lymphocyte differential (44.3% at discharge vs. 28.8% at admission; p = 0.001), total IgG (921 vs. 893 mg/dL; p = 0.01) and platelet count (371,000/μL vs. 262,000/μL; p < 0.001) (data not shown).
The mean age of the bronchopneumonia group was 4.6 ± 2.5 years, significantly younger than that of the segmental/lobar group (6.6 ± 3.0 years; p < 0.001). The total duration of fever (p = 0.02) and the length of hospitalization (p = 0.001) were longer in the segmental/lobar group than in the bronchopneumonia group. Compared with the bronchopneumonia group, the segmental/lobar group had a lower WBC count (p = 0.04), absolute lymphocyte count (1,900 ± 1,300/μL vs. 2,700 ± 1,700/μL; p < 0.001) and platelet count (p = 0.02), but a higher CRP level (5.1 ± 5.3 vs. 2.1 ± 2.3 mg/dL; p < 0.001) (Table 2).
Because patient age may be related to the severity of pneumonia and to the levels of laboratory indices (including WBC count and differential), we analyzed the clinical and laboratory parameters of the subgroup of 81 older children (≥6 years of age). When we divided these school-aged children into three groups according to the severity of pneumonia, as previously stated, the patients with more severe pulmonary lesions showed higher CRP levels (p = 0.02) and a higher proportion of seroconverters (p = 0.001, ANOVA). The duration of fever and the absolute lymphocyte count differed significantly between the bronchopneumonia group and the severe segmental/lobar pneumonia group (χ² test and independent-sample t-test) (Table 3).
Because the proportion of seroconverters tended to be higher in patients who had more severe pneumonia (Tables 2 and 3) , we evaluated the clinical and laboratory findings of the subjects according to diagnostic antibody status: those who were seroconverted (n = 86) and those who had increased titers (n = 105). There were no differences between the two groups in clinical and laboratory indices, except for platelet count (240,000 ± 84,000/μL vs. 289,000 ± 90,000/μL, respectively; p < 0.001) (data not shown).
The epidemiologic characteristics of M. pneumoniae may differ among populations [8] . Although earlier studies in Western populations reported that the incidence of MP is greatest among school-aged children [1] , the age distribution of all patients with MP in the present study was between 9 months and 14 years, with peak incidence at 4-6 years of age (figure 1). We found that MP affects children of all age groups, whereas the clinical phenotype of MP differs with age. Compared with the younger children, the older children had a more severe clinical course, manifested by longer total duration of fever, higher CRP, and a more severe pneumonia pattern. Recent clinical studies have also reported that some clinical features differ between younger children (≤5 years of age) and older children (>6 years of age) [4, 5] . Clinical features such as tachypnea, upper respiratory symptoms (coryza), and gastrointestinal symptoms (diarrhea and vomiting) were shown to be more common in younger children [4, 5] , and the rate of chest radiographic consolidation was higher in older children [5] .
Although variation in WBC count is known to be a nonspecific finding in M. pneumoniae infections, we found that lymphopenia may be one of the characteristics of MP in the acute stage. The mean WBC counts of children with MP at presentation were similar to the normal values for same-age references [14]; however, the lymphocyte differential and absolute lymphocyte counts were decreased. In addition, the severity of MP tended to be inversely associated with lymphocyte counts. Studies of adult patients with MP have also reported leukopenia with lymphopenia [10, 15, 16].
Recent clinical studies have reported thrombocytosis in some patients with MP [4, 5]. We also found that the platelet counts of patients with MP increased significantly in the convalescent stage, with thrombocytosis (>400,000/μL) observed in 8% of patients at admission but in 33% of patients at discharge. Thus, the platelet count in MP may be associated with the stage of inflammation and the age of the patient.
Although this finding may be an epiphenomenon that follows various infections, it is not yet known whether the phenomenon is also observed in viral or other pathogeninduced pneumonias.
Approximately 45% of patients in the present study were seroconverters. This finding verifies the observation that many patients are IgM sero-negative at presentation with MP. The absence of diagnostic IgM antibodies in the early stage of systemic infections has been well documented in previous studies of adults and children with MP [17, 18] and of other infections, including severe acute respiratory syndrome (SARS) due to coronavirus and measles [19, 20]. Ozaki et al. reported that 31.8% of children with MP were IgM-positive at admission when tested using an EIA (ImmunoCard), but that 88.6% of patients were IgM-positive when tested using paired sera (mean of 8.0 ± 3.0 days apart) [18]. Given these findings, and because some patients may be false-positives (recent past infection), especially younger children who may be res- [21, 22], the diagnosis of MP based on a single assay for IgM or on PCR without serologic tests is inadequate for patient selection.
Interestingly, among those aged ≥6 years, the group with more severe pneumonia had a greater number of seroconverters; i.e., patients with more severe pulmonary lesions may be more likely to be sero-negative at presentation. Because M. pneumoniae infection is controlled by the adaptive immune reaction of the host, including antibodies, patients with severe pneumonia may remain IgM-negative longer in the early stage of MP. Indeed, the three patients in the present series who had the most severe clinical course seroconverted only at a third examination after 1 week.
The detection of cold agglutinins (IgM) is nonspecific for M. pneumoniae infections; however, in other systemic infections (such as Epstein-Barr virus and adenovirus infections), the titer of cold agglutinins is rarely ≥1:64 [23]. It has been reported that detection of cold agglutinins provides higher sensitivity and specificity for the diagnosis of M. pneumoniae infections when compared with a serologic test for M. pneumoniae [24, 25]. Under the patient selection policy employed in this study, nearly all seroconverters seroconverted in both serologic assays; among the patients who were seropositive at presentation, 87% showed antibody titer increases ≥4-fold in both serologic assays. Of the 1,083 patients with pneumonia or lower respiratory tract infections during this study period, only 5 showed cold agglutinin changes (≤1:64) without MAA changes. The detection rate of PCR for M. pneumoniae in the present study (29%) was lower than those in previous studies of children [26]. We referred our respiratory samples to an external laboratory for PCR and detection of viral antigens. Inconsistent sample delivery times, inadequate dilution of sample volume, and other undetermined factors may (in part) have affected our results.
It has recently been reported that co-infection of M. pneumoniae with other bacterial and/or viral pathogens is not rare [2, 27, 28]. In the present study, two cases of M. pneumoniae pneumonia were co-infected with respiratory viruses, and there were no positive bacterial blood cultures. Because we did not perform extensive microbiological testing of all the subjects, we cannot exclude the possibility that some children might have had co-infection with other bacterial or viral pathogens. However, it is assumed that our methodology (two serologic tests and two examination times) reduced patient-selection bias as much as possible. The clinical implications of mixed infections, compared with infection by a sole agent, remain unresolved [28].
All children in this study were treated with amoxicillin-clavulanate and a macrolide (clarithromycin or roxithromycin); 75% of the patients defervesced within 2 days and 83% within 3 days after initiation of antibiotic treatment. Of the patients with a fever duration >4 days, 14 who were non-responsive to antibiotics and had progressive pneumonia received additional prednisolone treatment (1 mg/kg/day for 3-4 days, tapered within 1 week); these patients showed rapid improvement of clinical and radiographic findings, as previously observed [12]. The beneficial effects of systemic corticosteroids on severe or fatal MP have been well documented in children, adults, and experimental animals [10-13, 29]. A recent article reviewed the clinical, experimental, and pathologic studies supporting the notion of a host immune response, including cell-mediated immunity, in M. pneumoniae infections [8].
Because the present study was performed during a recent nationwide epidemic and the subjects were all inpatients, our results might not reflect the exact epidemiologic characteristics of M. pneumoniae infections. Clinicians can encounter intermittent endemic cases prior to an epidemic, which recurs in a 3-5 year cycle, and our results may help in preparing for coming MP epidemics and in understanding the clinical characteristics of MP.
The clinical phenotype of MP differs with age, with a longer duration of fever, higher CRP, and more severe pulmonary lesions observed in older children. The severity of pulmonary lesions was associated with a lower lymphocyte count and higher sero-negativity of diagnostic IgM antibodies at presentation. Short-term paired IgM serologic testing in the acute stage may assist in obtaining an early and definitive diagnosis of MP and reduce bias in patient selection. Further studies of the pathogenesis of M. pneumoniae infection are required.
|
The novel coronavirus (COVID-19) was first identified in Wuhan, China, in December 2019 among a cluster of patients who presented with an unidentified form of viral pneumonia and a shared history of visiting the Huanan seafood market. 1 Patients were assessed for viral pneumonia through the collection and testing of bronchoalveolar lavage fluid utilizing whole genome sequencing, cell cultures and polymerase chain reaction (PCR). The virus was isolated from biologic samples and identified as a member of the genus Betacoronavirus, placing it alongside the agents of Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS). 1 At the time of writing, the number of persons infected by the virus has surpassed 67 091 and Chinese authorities have reported 1527 deaths from the virus, most in Hubei, the provincial epicenter of the outbreak. 2 Over 25 countries have confirmed cases to date, including countries in Asia, Europe, North America and the Middle East (see Figure 1). 2 The virus spread internationally within 1 month of its first identification and can be transmitted via close human-to-human contact. 3 The World Health Organization (WHO) declared COVID-19 a Public Health Emergency of International Concern on 30 January 2020.
Another beta-coronavirus was first identified in Southern China (Guangdong province) in November 2002. The WHO did not receive an update from the Chinese government until the end of March, by which time 792 cases and 31 deaths had been reported. The lack of transparency of the Chinese health ministry has been cited as one of the largest contributors to the global spread of the virus. 4 By the end of the epidemic, >8,000 cases of the disease and 774 deaths had been reported, with a case-fatality rate of 7%. 5 The reservoir host of the disease was thought to be the Asian civet cat (Paguma larvata), and the foci of transmission from host to human were thought to be the open markets, much like in the COVID-19 outbreak currently ongoing. 5 The SARS global outbreak was contained in July 2003, and since 2004 no known cases of SARS have been reported. 6 After the emergence of SARS, MERS was the second coronavirus resulting in a major global public health crisis. It first emerged in 2012 in Saudi Arabia, when a 60-year-old man presented with severe pneumonia. 7 An outbreak of the virus did not occur until 2 years later, in 2014, with a total of 662 identified cases and a 32.97% case-fatality rate. 8 From 2014 to 2016, 1364 cases were observed in Saudi Arabia. A total of 27 countries spanning Europe, Asia, the Middle East and North America were affected by MERS during the outbreaks. Cases identified outside of the Middle East, including the outbreak in South Korea in which 186 individuals were infected as a result of a super spreader, were traced to individuals who had previously been infected in the Middle East. 9 Since 2012, 2494 laboratory-confirmed cases of MERS have been reported, with 858 associated deaths (34.4% case-fatality ratio). 8, 10

Key Messages

• Inadequate risk assessment by the Chinese government hampered efforts to contain the virus.
• The current novel coronavirus (COVID-19) has surpassed Severe Acute Respiratory Syndrome (SARS) in the number of cases and deaths from the disease.
• Closure of the live-animal markets in China may decrease the likelihood of another zoonotic outbreak occurring.
• Human-to-human transmission has been confirmed, and although several measures have been taken to mitigate the virus' spread, travel to impacted regions should be avoided if possible.
The objectives of our study are to provide an overview of the three major deadly coronaviruses and identify areas for improvement of future preparedness plans, as well as provide a critical assessment of the risk factors and actionable items for stopping their spread, utilizing lessons learned from the first two deadly coronavirus outbreaks, as well as initial reports from the current COVID-19 epidemic in Wuhan, China. Although the epidemic is still ongoing, initial lessons from its spread can help inform public health officials and medical practitioners in efforts to combat its progression.
Utilizing the Centers for Disease Control and Prevention (CDC, USA) website, and a comprehensive review of PubMed literature, we obtained information regarding clinical signs and symptoms, treatment and diagnosis, transmission methods, protection methods and risk factors for MERS, SARS and COVID-19. Additionally, the Chinese Center for Disease Control and Prevention (CCDC) was accessed for up-to-date information on COVID-19. Furthermore, verified news articles were also of interest in obtaining up-to-date case and fatality numbers on COVID-19. The Johns Hopkins University website was also utilized to access maps and spatio-temporal information regarding the virus. 2 SARS and MERS data were compiled from the WHO's latest situation report for creation of graphs and maps to compare the spatial distribution of the three coronaviruses.
Patients and the public were not involved in this research.
With respect to COVID-19, diagnosis was conducted initially by assessing the clinical characteristics of the presenting patient, chest imaging and the ruling out of common bacterial and viral pneumonia. Once common bacterial and viral pathogens were ruled out, lower and upper respiratory tract specimens were obtained for cell culture and deep sequencing analysis. These specimens indicated a novel coronavirus initially known as '2019-nCoV'. 3 PCR, using the 'RespiFinderSmart22kit' (PathoFinder BV) real-time reverse transcription PCR (RT-PCR) assay, was used to detect viral RNA by targeting a consensus RNA-dependent RNA polymerase region of pan-β-CoV. 3 A diagnostic test was developed soon after viral isolation. Treatment in some hospitals involves prophylactic antibiotics to prevent secondary infection. 3 To date, no antiviral agent has been proven effective against COVID-19. Initial reports showed that oseltamivir was given to 93% of patients (orally administered 75 mg twice daily) in combination with antibiotics. Patients experiencing severe illness (22%) were given corticosteroids (40-120 mg/day) to reduce lung inflammation due to high levels of cytokines caused by the virus, as part of a combined regimen for cases that were community-acquired and diagnosed at the designated hospital. 3 Since the combination of lopinavir and ritonavir was already available in the local hospital, a randomized controlled trial was initiated quickly to assess the efficacy and safety of combined use of lopinavir and ritonavir in patients hospitalized with COVID-19 infection. 3 A suspected case according to the WHO is a patient 'with severe acute respiratory infection (fever, cough and requiring admission to hospital), and with no other etiology that fully explains the clinical presentation and at least one of the following: a history of travel to or residence in the city of Wuhan, Hubei Province, China in the 14 days prior to symptom onset, or the patient is a health care worker who has been working in an environment where severe acute respiratory infections of unknown etiology are being cared for'. 11 A confirmed case is 'a person with laboratory confirmation of COVID-19 infection, irrespective of clinical signs and symptoms'. 11 On 13 February 2020 the Hubei National Health Commission said it would include cases confirmed by clinical diagnosis using CT scans in addition to RT-PCR, adding several thousand new cases to the total count.

Diagnosis of MERS by the WHO is defined initially as patients presenting with a fever, cough and hospitalization with suspicion of lower respiratory tract involvement. 12 Patient history was obtained upon hospitalization, and prominent considerations for diagnosis involved a history of contact with probable or confirmed cases of the illness, or a reported history of travel or residence within the Arabian Peninsula. Severe cases were subjected to laboratory testing. 13 Similar to COVID-19, RT-PCR was used for diagnosis, and additional serum tests for antibodies to the virus were developed. In Saudi Arabia, a clinical trial showed that a combination of lopinavir-ritonavir and interferon beta-1b was effective among MERS cases. 14 Additionally, a broad-spectrum antiviral nucleotide prodrug named remdesivir presented potent efficacy for the treatment of MERS coronavirus and SARS coronavirus in preclinical studies. 15, 16
A patient was considered to have laboratory-confirmed SARS if there was a positive RT-PCR result from two or more clinical specimens, either from different sites or tested in different laboratories, obtained from patients before or after death, or if there was seroconversion by enzyme-linked immunosorbent assay, indirect fluorescent antibody test or neutralization assay. 17 Similar to MERS, serologic testing for IgG antibodies was developed for the SARS coronavirus. Treatment of SARS involved combination therapy of lopinavir and ritonavir and was associated with substantial clinical benefit and fewer adverse clinical outcomes. 18 There are several similarities between these viruses in their diagnosis and treatment. All three viruses are definitively diagnosed by utilizing cell cultures of respiratory fluids, serum antibody analysis or RT-PCR analysis of respiratory fluids from patients. All three viruses cause pneumonia, and radiography of the lungs is an important diagnostic tool for preliminary and broad identification of the severity of the disease. These viruses are similarly treated with antiviral therapies, although no specific antiviral therapy has yet been approved for COVID-19, with clinical trials underway. The major difference between COVID-19 and its predecessors is that this virus rarely produces runny noses or gastrointestinal symptoms in those infected, which are commonplace in MERS and SARS cases. 3
There is limited knowledge regarding the transmission of COVID-19. Transmission has been confirmed to occur from human to human, and it is thought to be spread through respiratory droplets from coughs or sneezes. 3 Primary cases of COVID-19 have been traced back to the Huanan seafood market, with secondary cases occurring at hospitals among nurses and physicians who had extensive contact with COVID-19 patients. Furthermore, several individuals who did not have direct contact with the Huanan seafood market were diagnosed with the disease.
MERS is also transmitted through close person-to-person contact (primarily in health care settings during the symptomatic phase of the disease), although instances of such transmission were significantly fewer during the height of the MERS epidemic. Transmission occurs through respiratory secretions from coughing and sneezing, whereas primary cases of the virus have been traced to close contact with infected dromedary camels, the animals identified as the reservoir host for MERS. 19 Similarly, the transmission of SARS occurred during close person-to-person contact via respiratory droplets from sneezing or coughing, and at a rapid rate, although not as quickly as in the current outbreak of COVID-19. Furthermore, fomites, fecal transmission and the handling of animals (killing, selling or preparing wild animals) were less common methods of transmission. 20 The modes of transmission, although still partly unclear for COVID-19, are thought to follow the same mechanism for all three viruses. Infection via the respiratory droplets or secretions of infected individuals is thought to be the predominant mode of transmission from human to human. The spread of infection in the current outbreak is occurring more rapidly than in the SARS epidemic. Rates of human-to-human transmission were generally lower for MERS, possibly in part due to the higher case fatality ratio (CFR) among those diagnosed with the disease.
The initial report on the first 41 cases of the COVID-19 outbreak showed that most patients infected with COVID-19 were males [30 (73%) of 41], with less than half possessing underlying comorbidities [13 (32%)], which included diabetes, hypertension and cardiovascular disease. 3 The median age of cases was 49.0 years (interquartile range 41.0-58.0). Of the initial 41 patients infected, 27 (66%) had been directly exposed to the Huanan seafood market, and the CFR was nearly 2%. 3 The CFR has remained at 2% since the start of the epidemic.
In contrast, MERS has a much higher CFR (35%), and due to the severity of the illness it often necessitates mechanical ventilation (in 50-89% of cases). 21 According to a study outlining the risk of mortality and severity of MERS cases from 2012-2015, the mean age of all patients was 50 years (SD: 18), over half of the cases (51.1%) reported underlying comorbidities, and 7.6% reported direct contact with a camel. 22 Cases were predominantly male (66.6%) and from Saudi Arabia. 22 Overall, 35% of all patients who were diagnosed with MERS have died. The CFR was higher in Saudi Arabia (42%), whereas South Korea reported 19%, ranging from 7% among younger age groups to 40% among older adults (≥60 years). 23 Older age and underlying comorbidities were identified as the predominant risk factors for progression to severe MERS. 22 The SARS virus had an overall CFR of 11%. 24 Women represented 55.7% of those diagnosed with SARS but had a lower CFR than men (13.2 and 22.3%, respectively). 24 About 49% of cases were <40 years old, with a CFR of 3%, whereas 21.5% of cases were >59 years old, the group with the highest CFR (54.5%). Nearly a quarter of cases (23.1%) were healthcare workers, who had the lowest CFR (2%). 24 The CFRs across the three viruses range from 2 to 35% (see Figure 2), with the highest among MERS cases and the lowest in the current outbreak, although it is important to note that the CFR for COVID-19 should be interpreted cautiously as the outbreak is still ongoing, and it is hypothesized that many cases are yet to be confirmed due to a lack of RT-PCR kits in China. It is too early for comparisons to be made between the morbidity and mortality rates of the first two coronaviruses and the current epidemic.
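As a quick arithmetic check, a CFR is simply deaths divided by confirmed cases. The sketch below uses the outbreak totals cited in this review, with the commonly cited WHO total of 8,096 SARS cases as that denominator; note that the resulting ratio for SARS (about 9.6%) differs from the cohort-specific estimates quoted in the text, and the COVID-19 figures are those reported at the time of writing:

```python
# Case-fatality ratios from (deaths, confirmed cases) totals cited above.
cases = {"COVID-19": (1014, 42_820), "SARS": (774, 8_096), "MERS": (858, 2_494)}
for virus, (deaths, confirmed) in cases.items():
    print(f"{virus}: CFR = {deaths / confirmed:.1%}")
# COVID-19: CFR = 2.4%  /  SARS: CFR = 9.6%  /  MERS: CFR = 34.4%
```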
Risk factors for COVID-19 are still largely unknown; however, it is believed that the virus was transmitted to humans via contaminated live animals (snakes, civet cats). All three beta coronaviruses emerged via zoonotic transmission, and the risk factors for zoonotic transmission of SARS and MERS were direct contact with infected animals. The suspected reservoir hosts of COVID-19 are currently believed to be bats, as in the SARS epidemic, and the focal point of the epidemic is the Huanan seafood market. SARS was also hypothesized to have arisen from one of these types of markets. These commonalities may signal the need for closure of such wholesale markets in China. China has a long history of live-animal markets, which are considered vital to communities across the country. As such, it is unlikely that these markets will be closed permanently, although their closure would be the strongest deterrent to another zoonotic disease outbreak. Re-opening of these markets should be under the strict purview of the CCDC, and appropriate measures should be taken to ensure health and hygiene protocols that limit live-animal and human contact. Surveillance of these markets may be vital for controlling the spread of zoonotic diseases; surveillance activities could be similar to those for novel influenza viruses undertaken by the US Centers for Disease Control and Prevention.
The three viruses are similar in their zoonotic transmission from infected animals to humans. The MERS reservoir host is the dromedary camel; the SARS reservoir hosts are likely bats. It is still unclear whether COVID-19 was zoonotically transmitted from an infected civet cat, snake or other animal at the Huanan seafood market.

Figure 2. Infographic comparison of the three major beta coronaviruses, 10 February 2020.
Several challenges have been identified in preventing the spread of COVID-19. Among them are limited coordinated efforts among stakeholders, with few policies in place for inter-sectoral collaboration, and a lack of medical supplies (shortages of masks and goggles) and of laboratory facilities for assessment of the disease. Additionally, many cases may have been asymptomatic, making it difficult to predict when the epidemic will peak and harder to detect cases. A recent correspondence to the New England Journal of Medicine documented an asymptomatic contact in Germany. 25

In order to contain the virus, Chinese authorities prevented travel to, from and within the city of Wuhan on 23 January, as airline and railway departures were suspended. Between 23 and 25 January, travel restrictions were implemented in 18 additional cities, affecting nearly 60 million people. The orders issued by Beijing were significant, but late in coming: the first official case of the virus had been confirmed almost 2 months previously (8 December 2019), and the length of time it took for China's state-controlled media to reveal the nature of the illness was too great. This can be attributed to a failure of proper risk assessment and management by the Chinese government and health ministry. By the time travel was suspended, however, millions of Chinese citizens had passed through the affected region, unaware of the risks involved.

SARS, too, spread globally, because many of the clinical features of the disease were unknown early in the course of the outbreak and because clinical infection control varied significantly among South East Asian countries. Similar to the current situation in China, there were inadequate supplies of protective gear for the general population which, had they been available, could have slowed the spread of the illness. SARS signs and symptoms presented rapidly, and health and hospital authorities were ill-prepared. This, combined with insufficient communication between the Chinese government and the public, led to panic. Additionally, the lack of infrastructure such as infectious disease hospitals in China added complexity to its control. The pandemic cost the global economy an estimated $30-$100 billion. 26

MERS, on the other hand, did not spread globally as rapidly, in part due to the lower risk of human-to-human transmission of the virus. Asymptomatic cases also provided an extra layer of complexity in the control of the disease. The biggest threat to the eradication of the disease is the variability and inadequacy of infection control in the region most heavily impacted by the virus (the Middle East). These inadequate infection control measures included a lack of physical barriers between patients, a lack of negative pressure rooms and overall non-adherence to infection protocols such as proper hand hygiene and sanitation methods. 27

Parallels can be drawn between SARS and COVID-19, which can partially be attributed to their origins in China. Similar conditions led to the explosive spread of these viruses, including exposure to live animals at open markets, overcrowded conditions, lack of health infrastructure and lack of transparency between government officials and the general population. MERS and COVID-19 are also similar in that cases can remain asymptomatic while still spreading the disease. 25 These viruses all erupted with no specific vaccine or treatment recommended, which is hampering efforts by public health and medical practitioners to limit the spread of COVID-19.
Healthcare workers (HCWs) were infected at high rates during the MERS and SARS outbreaks, with 18.6% of MERS cases occurring in HCWs and 21% of SARS cases occurring in HCWs. 28, 29 HCWs were infected, in part, through the use of nebulizers, endotracheal suction and intubation, cardiopulmonary resuscitation, nasogastric feeding and high flow-rates of oxygen. The high risk presented by these procedures has implications for medical practice and organization of hospital care during the current infectious disease outbreak. The capacity of COVID-19 to infect healthcare workers has been confirmed, although comparisons with MERS and SARS cannot yet be made.
Inadequate risk assessment regarding the urgency of the situation and limited reporting on the virus within China has, in part, led to the rapid spread of COVID-19 throughout mainland China and into nearby and distant countries.
Collaboration between governmental agencies and outside organizations (i.e. the CDC and WHO) can prove key to combatting the spread of an epidemic through risk communication and the dissemination of public health information, as evidenced by the rapid response to the MERS outbreak in South Korea in 2015. 30 With respect to the current outbreak, the Chinese Ministry of Health shared the genetic sequence of the COVID-19 virus 8 days after isolating it (10 January), which provided other countries with the ability to diagnose the virus quickly using rapid testing methods, although government transparency at the start of the outbreak was not ideal. It took the Chinese government from 8 December, when the first case of the virus occurred, until 3 January to initiate emergency monitoring, case investigation and investigation of the seafood market. Additionally, it took from 31 December, when the Wuhan Health Commission announced the outbreak, until 8 January for the government to publicly declare that the novel coronavirus was the cause.
Compared with SARS and MERS, COVID-19 has spread more rapidly, due in part to increased globalization and to the location of the epidemic's focus. Wuhan, China is a large hub connecting the North, South, East and West of China via railways and a major international airport. The availability of connecting flights, the timing of the outbreak during the Chinese (Lunar) New Year and the massive rail transit hub located in Wuhan have enabled the virus to spread throughout China, and eventually, globally (see Figure 3). 31
The new outbreak of respiratory illness caused by a novel coronavirus termed 'COVID-19' has emerged as a serious global public health concern. 32 The illness was first announced on 31 December 2019, 33, 34 and the rapid spread of the virus is fueling fears of a global pandemic. 35 During the initial period of the outbreak, from 8 December to 21 January, 425 cases were identified, with a growing number of cases not linked to the Huanan seafood market. 1 The number of cases reported and documented since the declaration of the initial outbreak has grown exponentially. At the time of writing, there are 42 820 cases of COVID-19, with 1014 deaths, and the virus has spread to >26 countries.
COVID-19 is a new strain of coronavirus not previously identified in humans. 36 Coronaviruses are zoonotic and are a large family of viruses that cause illness ranging from the common cold to more severe diseases, such as MERS and SARS. 21 Initially, many of the patients in the outbreak in Wuhan reported some link to a large seafood and live-animal market, suggesting zoonotic transmission. Scientists also believe that an animal source is 'the most likely primary source', and human-to-human transmission has occurred, with growing numbers of cases reportedly without exposure to animal markets. 3 Early in the outbreak, a top Chinese government-appointed expert stated a mysterious respiratory illness had killed at least four people with evidence of human-to-human transmission, heightening public concern. 36,37 It is likely that person-to-person spread will continue to occur, and similarities can be drawn more closely to SARS than MERS, because of the virus' rapid rate of infection. 38 When person-to-person spread occurred with SARS and MERS, it is thought to have happened via respiratory droplets produced when an infected person coughs or sneezes, similar to how other respiratory pathogens spread. 12, 29 This is the same hypothesized mechanism of transmission for COVID-19. The spread of MERS and SARS between people has generally occurred between close contacts, similar to the current epidemic in China. According to the WHO, common signs of COVID-19 infection include respiratory symptoms, fever, cough, shortness of breath and breathing difficulties. Serious cases can lead to pneumonia, severe acute respiratory syndrome, kidney failure and death. 39 The WHO has advised avoidance of 'unprotected' contact with live animals, to thoroughly cook meat and eggs, and avoiding close contact with anyone with cold or flu-like symptoms. Additionally, the CDC issued a travel statement to avoid all non-essential travel to China (Level 3 Travel Notice). 40 Control measures have been put into effect, although the timing of these measures and the relatively rapid spread of the current virus suggests that lessons from the previous SARS and MERS epidemics were not heeded.
The surge in infections is alarming due to increased international travel around the Lunar New Year (25 January) and Chinese business ties across the globe. Researchers should critically review the virus' genome sequence to ascertain the presence of human-to-human transmission, incubation period, modes of transmission, the common source of exposure and the presence of asymptomatic or mildly symptomatic cases that go undetected. Risk assessment is badly needed to control the impact of the virus. This risk assessment should include an evaluation of the current standard of epidemiologic surveillance and identifying the risk factors for human-to-human transmission from asymptomatic cases.
Since there is no specific treatment for coronaviruses, 39 there is an urgent need for global surveillance of humans infected with COVID-19. The combined role of internet of things (IoT) and related technologies can play a vital role in preventing the spread of zoonotic infectious diseases. Smart disease surveillance systems could enable simultaneous reporting and monitoring, end-to-end connectivity, data assortment and analysis, tracking and alerts. Remote medical assistance should also be adopted to detect and control zoonotic infectious disease outbreaks.
Airport authorities around the globe must take precautionary measures to reduce the risk of importation of COVID-19. Urgent actions such as screening air passengers traveling from China are needed to contain the spread of suspected COVID-19 cases. Patients with symptoms of respiratory diseases must be reported to the authorities. After further investigation, travelers that are symptomatic or fit the case definition for the novel coronavirus need to be sent to local hospitals for further management. However, this may prove difficult due to the asymptomatic nature of some cases. Wuhan alone has connections with more than 60 overseas destinations through its international airport, whereas Beijing, Shanghai and Shenzhen, all of which have reported cases, have hundreds more. Furthermore, airport authorities should also display alerts on the signs and symptoms of the virus, and preventive measures should be taken by travelers around the globe.
In order to halt the spread of the COVID-19 outbreak, affected countries should look to past successes and failures in the containment of beta-coronaviruses. Lessons learned from the MERS and SARS outbreaks can provide valuable insight into how to handle the current epidemic. These include proper hand hygiene, isolation of infected individuals in properly ventilated hospitals (negative pressure rooms), isolation of individuals with suspected symptoms or fever, and preventing direct contact with suspected animal reservoir hosts. Unfortunately, some lessons were not heeded: transparency was lacking between government officials and the public, which in turn led to the rapid spread of the virus. In contrast, the rapid sharing of the genetic sequence of the virus has led to faster diagnoses on a global scale. The Chinese government was also able to build hospitals quickly in order to house the infected, although these steps came too late to prevent the spread.

The top concern of health officials now should be halting the spread of the infection. Thus far, >40 000 people have been infected, with no end in sight to the epidemic. Basic epidemiological parameters of patients, including person, place and time of diagnosis, as well as clinical signs and symptoms, outcome of the infection, severity, exposures and travel histories, must be ascertained for each case. The role that live-animal markets have played in the SARS and COVID-19 epidemics highlights the need for a paradigm shift in China away from their use. Closure or suspension of these markets would be the most conservative approach to preventing zoonotic transmission; however, China has a long history of live markets, and a permanent ban is unlikely. In that case, the use of proper hygiene and of protocols limiting animal-to-human contact would be ideal, as would increased epidemiologic surveillance and monitoring.

More research should be carried out on the development of effective methods for the early and timely detection of such diseases; these methods have the potential to reduce morbidity and mortality. Future research should address the uses and implications of IoT technologies for mapping the spread of infection. Effective measures need to be taken to avoid the unpredictable risk of continuing outbreaks in China and the possibility of a local outbreak turning into a global pandemic.

The authors disclose no other sources of funding.
|
The global prevalence of diabetes mellitus (DM) reached 463 million in 2019, and the World Health Organization (WHO) estimates that, worldwide, the number of people living with diabetes will increase to 700 million by 2045 [1]. Intensive glycemic control has been shown to be beneficial for the management of the microvascular complications of diabetes [2, 3]. As a result, the prevalence of these diabetes-related complications might also have changed [4].
Diabetes has been implicated in all-cause mortality, especially in deaths related to cardiovascular and cerebrovascular disease [5, 6]. Additionally, diabetes is often associated with premature deaths from noncommunicable diseases [7] as well as communicable diseases, including SARS [8], MERS, H1N1 [9, 10] and COVID-19 [11, 12]. In 2016, an estimated 1.6 million deaths were directly caused by diabetes; another 2.2 million deaths were attributable to high blood glucose in 2012 [13]. Collectively, diabetes-related mortality is determined by the prevalence of diabetes, clinical care, self-management behaviors and risk-factor control. Given that clinical guidelines and recommendations regarding some of these factors have been considerably modified during the past two decades, diabetes-related deaths from microvascular complications might have changed under tighter glycemic control. However, the global burden of vascular complication-related mortality remains unknown. Understanding the mortality trends of vascular complication-related events will help evaluate current clinical hypoglycemic treatments and guide future diabetes management. Therefore, we aimed to investigate the global burden and trends of diabetic deaths due to vascular complications.
We identified countries with available data on all-cause diabetes-related deaths from the WHO Mortality Database (WDB), which provides the most comprehensive (standardized national) mortality statistics for countries around the world [14]. The data from the WDB are official national statistics, in the sense that they have been transmitted to the WHO by the competent authorities of the countries concerned. The WDB comprises deaths registered in national vital registration systems, with the underlying cause of death as coded by the relevant national authority. Underlying cause of death is defined as "the disease or injury which initiated the train of morbid events leading directly to death, or the circumstances of the accident or violence which produced the fatal injury", in accordance with the rules of the International Classification of Diseases (ICD). There is therefore primary cause-specific attribution of the ICD code to mortality on death certificates. The WDB contains the number of deaths by country, year, sex, age group and cause of death as far back as 1950. Data are included only for countries reporting data properly coded according to the ICD. We used ICD-10 (version 2019) with detailed 4-character codes (ICD-10-4) to identify all diabetes-related deaths (codes E10-E14), which include deaths from type 1 DM (E10), type 2 DM (E11), malnutrition-related DM (E12), other specified DM (E13) and unspecified DM (E14). The ICD-10 was endorsed in May 1990 by the Forty-Third World Health Assembly. From 1990 to 2017, a total of 10 editions of ICD-10 were released by the WHO, with the latest edition released in 2016 [15]. We compared all ICD-10 editions and found consistent codes for diabetes mellitus and diabetic complications (E10-E14) during this period. For countries with available ICD codes, we stratified the data by diabetic complications. The vascular complications analyzed in this study were renal complications (diabetic nephropathy, intracapillary glomerulonephrosis, Kimmelstiel-Wilson syndrome), ophthalmic complications (cataract, retinopathy), neurological complications (amyotrophy, autonomic neuropathy, mononeuropathy, polyneuropathy) and peripheral circulatory complications (gangrene, peripheral angiopathy, ulcer). Detailed ICD-10 codes for diabetes mellitus and vascular complications used in this study are shown in Additional file 1: Appendix S1. Population data were derived from the WHO Mortality Database and the World Population Prospects 2019 [14, 16].
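For illustration, this case-selection step can be sketched in Python as follows. The flat-file layout and column names (Cause, Deaths) are assumptions made for the example rather than the WDB's actual schema, and ICD-10 codes are assumed to be stored without the decimal point (e.g., "E102"):

import pandas as pd

# Illustrative sketch only: file name and column names are assumptions,
# and ICD-10 codes are assumed stored without the decimal point (e.g. "E102")
wdb = pd.read_csv("who_mortality_extract.csv", dtype={"Cause": str})

# All diabetes-related deaths: ICD-10 E10-E14
diabetes = wdb[wdb["Cause"].str.match(r"E1[0-4]", na=False)]

# The 4th character of the code marks the complication: .2 renal,
# .3 ophthalmic, .4 neurological, .5 peripheral circulatory
fourth = {"2": "renal", "3": "ophthalmic",
          "4": "neurological", "5": "peripheral circulatory"}
vascular = diabetes[diabetes["Cause"].str[3].isin(fourth)].copy()
vascular["complication"] = vascular["Cause"].str[3].map(fourth)

print(vascular.groupby("complication")["Deaths"].sum())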
We identified 135 countries in the database that had included diabetes as a cause of death. Among them, 18 countries used 3-character codes (ICD-10-3) and were therefore excluded. We then extracted mortality data for the 17-year period between 2000 and 2016. Eventually, 108 countries were included in the final analysis (Additional file 1: Appendix S2). Available years of diabetic vascular complication-related deaths, total diabetes-related deaths and mid-year population are provided in Additional file 1: Appendix S3-S5.
To investigate the time trends of vascular complication-related deaths, we first estimated crude proportions, using the number of deaths reporting any renal, ophthalmic, neurological or peripheral circulatory complication as the numerator and the number of all-cause diabetes-related deaths as the denominator, and crude rates, using the same numerator and the mid-year population as the denominator. Age-standardized proportions and rates were then calculated by the direct standardization method, using the age-specific (0-4 years, 5-9 years, 10-14 years, …, 95-99 years, ≥ 100 years) numbers of diabetic deaths and populations. Secondly, we compared the crude and age-standardized proportions and rates of each country, taking the overall estimates of all countries as the reference. Data were reported as odds ratios (ORs). We also compared differences in the proportions and rates by age group, region, DM type, and domestic income (based on the World Bank classification) [17].
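As a worked illustration of direct standardization (all numbers and age bands below are placeholders, not study data):

# Worked example of direct standardization; all values are placeholders
deaths = {"0-44": 120, "45-64": 900, "65+": 2400}              # complication-related deaths
person_years = {"0-44": 4.0e6, "45-64": 1.5e6, "65+": 0.6e6}   # mid-year population
std_weight = {"0-44": 0.60, "45-64": 0.25, "65+": 0.15}        # standard population weights

# Age-specific rates, then weight by the standard population
rates = {age: deaths[age] / person_years[age] for age in deaths}
asr = sum(std_weight[age] * rates[age] for age in rates)
print(f"age-standardized rate: {asr * 1e5:.1f} per 100,000 person-years")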
Using the crude and age-standardized proportions and rates, we performed joinpoint regression to investigate the time trends of diabetic vascular complication-related deaths. This approach has been widely used to study time trends in mortality from causes including infectious [18] and non-infectious diseases [19]. A description of joinpoint regression is given in Additional file 1: Appendix S6. Briefly, the joinpoint analysis identifies the best-fitting inflexion points ("joinpoints") at which there is a significant change in trend, using a series of permutation tests with Bonferroni adjustment for multiple comparisons. Joinpoint regression differs from similar models, such as piecewise regression, in that it imposes continuity at the change-point(s) and selects the number of joinpoints. Single-segment trends were expressed as annual percentage changes (APC), while the average annual percentage change (AAPC) was used as a summary measure of the overall trend. We used the Z test to assess whether an APC or AAPC was significantly different from zero. Joinpoint regression analysis was performed for the overall estimates and for subgroups by sex, type of complication, region and domestic income. Lastly, we used the calendar year 2000 as the reference to compare trends of ORs in different subgroups.
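For reference, the standard joinpoint quantities are defined as follows (these are the usual definitions, not formulas reproduced from the paper), where r_y is the rate or proportion in year y, b_i is the log-linear slope on segment i, and w_i is the segment length in years:

\ln r_y = \alpha + \beta y \quad\Rightarrow\quad \mathrm{APC} = 100\,(e^{\beta} - 1)

\mathrm{AAPC} = 100\left[\exp\!\left(\frac{\sum_i w_i b_i}{\sum_i w_i}\right) - 1\right]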
We used Microsoft Excel 2010 for data extraction, sorting, and cleaning. IBM SPSS Statistics (version 25.0) and the Joinpoint Regression Program (version 4.7.0.0) were used for data analysis. Results were reported as absolute numbers, percentages and 95% confidence intervals (95% CI). Proportions were reported per 1000 deaths, and rates per 100,000 person-years. A p value of less than 0.05 was considered statistically significant.
Between 2000 and 2016, a total of 7,108,145 diabetes-related deaths were reported in the 108 selected countries, including 1,904,787 cases (26.8%) attributed to vascular complications. By type of diabetes, there were 88,479 cases of T1DM, 687,959 cases of T2DM, 11,466 cases of malnutrition-related DM, 2326 cases of other specified DM and 1,114,557 cases of unspecified DM. Among the 1,904,787 deaths from vascular complications, the largest proportion was attributed to diabetic nephropathy (1,355,085 cases, 71.1%), followed by peripheral circulatory complications (515,293 cases, 27.1%), diabetic neuropathy (28,697 cases, 1.5%) and diabetic retinopathy (5751 cases, 0.3%).
Over the 17-year study period, the overall age-standardized proportion of diabetic vascular complication-related deaths was 267.8 (95% CI 267.5-268.1) cases per 1000 deaths. The highest proportion was found in Singapore (759.6 cases per 1000 deaths, 736.8-782.5), while the lowest was in Sri Lanka (10.9 cases per 1000 deaths, 8.7-13.1) (Additional file 1: Appendix S7). Eighty-four countries (77.8%) showed an age-standardized proportion higher than 100 per 1000 deaths. Compared to the overall mean, 68 countries (63.0%) had an OR < 1, while 40 countries (37.3%) had an OR > 1, ranging from 0.05 (0.04-0.06) in Sri Lanka to 3.37 (3.12-3.65) in Singapore (Fig. 1; Additional file 1: Appendix S8). The age-standardized proportions were higher in men (279.6 cases per 1000 deaths, 278.7-279.7) than in women (257.8 cases per 1000 deaths, 257.4-258.3) (Additional file 1: Appendix S9). The proportions of deaths due to vascular complications were higher in older persons than in younger individuals: the highest age-standardized proportions were found among those aged 45-64 years (311.2 cases per 1000 deaths, 310.5-311.9), while the lowest were found among those aged < 20 years (78.0 cases per 1000 deaths, 73.4-82.6) (Additional file 1: Appendix S10). These differences were consistent in both men and women.
Between 2000 and 2016, the overall age-standardized rate of vascular complication-related deaths was 53.6 (53.5-53.7) cases per 100,000 person-years (Additional file 1: Appendix S12). The highest rate was found in Mauritius (496.9 cases per 100,000 person-years, 490.1-503.7), while the lowest rate was documented in Sri Lanka (0.45 cases per 100,000 person-years, 0.38-0.52) (Additional file 1: Appendix S7). Eighty-one countries (80.2%) had an age-standardized rate of less than 100 cases per 100,000 person-years. Compared to the overall mean, 67 countries (66.3%) had an OR < 1, while 34 (33.7%) had an OR > 1, ranging from 0.006 (0.005-0.007) in Sri Lanka to 7.15 (6.98-7.32) in Mauritius (Fig. 1; Additional file 1: Appendix S8). [Figure 1: age-standardized proportions and age-standardized rates of vascular complication-related deaths, by country.] Vascular complication-related death rates were higher in older persons than in their younger counterparts; the highest rate was observed in the oldest age groups, while the lowest was found in persons aged < 20 years (0.10 cases per 100,000 person-years, 0.09-0.11) (Additional file 1: Appendix S13). By region, North America had the highest age-standardized death rate due to vascular complications (94.7 cases per 100,000 person-years, 94.5-95.0), followed by South America (66.0 cases per 100,000 person-years, 65.7-66.2) and Oceania (50.8 cases per 100,000 person-years, 50.1-51.4). By contrast, the Middle East (11.2 cases per 100,000 person-years, 11.1-11.4) and Africa (13.0 cases per 100,000 person-years, 12.7-13.2) had the lowest rates (Additional file 1: Appendix S14). By complication type, the highest rate was found for renal complications (38.1 cases per 100,000 person-years, 38.0-38.2), followed by peripheral circulatory complications (11.7 cases per 100,000 person-years, 11.6-11.8), neurological complications (0.87 cases per 100,000 person-years, 0.86-0.88), and ophthalmic complications (0.16 cases per 100,000 person-years) (Fig. 2).
The trend of OR also varied across subgroups (Additional file 1: Appendix S15, S16). The joinpoint model analysis showed that the overall AAPC of the rate was 1.9% (1.4-2.4%; p < 0.05). Both men (AAPC = 2.2%; 1.7-2.7%; p < 0.05) and women (AAPC = 1.6%; 1.1-2.1%; p < 0.05) showed an upward trend during the study period. However, the trend varied across subgroups, as shown in Table 2 and Fig. 3. By DM type, the AAPC increased in patients with T2DM (AAPC = 7.2%; 6.3-8.2%; p < 0.05) but decreased in persons with T1DM (AAPC = −8.7%; −14 to −3.1%; p < 0.05). Similarly, by complication type, the AAPC was higher for diabetic nephropathy (AAPC = 2.7%; 1.2-4.3%; p < 0.05) and diabetic neuropathy (AAPC = 1.7%; 0.4-2.9%; p < 0.05), but lower for diabetic peripheral circulatory complications (AAPC = −4.5%; −5.0 to −3.9%; p < 0.05). By region, as shown in Fig. 4, the annual rate trended upward in Africa, the Caribbean, Europe, the Middle East, North America and Oceania, but downward in Asia and South America.
In this study, we reported on the global burden and trends of diabetes-related deaths due to vascular complications between 2000 and 2016.
Over the last two decades, the prevalence of diabetes increased rapidly despite prevention and intervention programs [1]. As a consequence, the prevalence of diabetic complications may also have changed [20]. Edward and colleagues demonstrated a downward trend of diabetes-related complications between 1990 and 2010, including end-stage renal disease (ESRD) and lower extremity amputation (LEA), which declined by 29% and 53%, respectively [4]. Harding et al. summarized the evidence and concluded that, with regard to vascular complications, the incidences of LEA [22, 23], diabetic retinopathy [24] and ESRD [4, 25] decreased, while diabetic neuropathy increased during the last few decades [21]. Given these findings, mortality due to diabetic vascular complications might have been expected to decline over the same period. However, in this study, we found that the overall trend of diabetes-related deaths due to vascular complications continued to increase between 2000 and 2016. Specifically, an upward trend was observed in renal and neurological complication-related deaths, while a downward trend was observed in peripheral circulatory and ophthalmic complication-related deaths. Additionally, we found that the mortality trend due to vascular complications in T1DM decreased during 2000-2010 but increased during 2010-2013, although these turning points were not present in the combined analysis. For T2DM, on the other hand, the upward trend of mortality persisted throughout 2000-2016. These findings indicate that hard endpoints related to vascular complications in T2DM present a serious challenge for diabetes care and management.
In this study, we observed that mortality trends varied by type of vascular complication. The overall proportions and rates of deaths due to vascular complications increased by 43.8% and 30.4%, respectively. However, this increase was primarily driven by growth in renal complications, which increased by 74.8% and 59.8%, respectively, during the study period. This finding underscores the central role of diabetic nephropathy [21]. Diabetes is the leading cause of ESRD in the United States, and the 5-year survival of patients with ESRD is less than 40% [27, 28]. Additionally, diabetic nephropathy was historically diagnosed using albuminuria. Over the past two decades, non-albuminuric reductions in eGFR have been increasingly recognized as diabetic nephropathy; ESRD mortality that might previously have been attributed to hypertension might more recently have been attributed to diabetic nephropathy [21, 29]. Therefore, the reported increase in diabetic nephropathy-related mortality might be affected by this changing understanding of diabetic nephropathy. Of note, intensive glycemic control has been widely studied for its benefits on renal outcomes [30]. In a meta-analysis of clinical trials (ACCORD, ADVANCE, UKPDS, and VADT), Zoungas and colleagues found reduced kidney events with intensive glucose control [2]. Another study showed consistent reductions in microalbuminuria surrogates [31]. However, these outcomes do not necessarily translate into downstream improvements in the corresponding hard endpoints [32, 33]. In contrast, almost all studies have consistently shown a significant increase in severe hypoglycemia and related mortality [34-37]. Correspondingly, chronic kidney disease (CKD) is closely correlated with hypoglycemia and may increase the risks of hypoglycemia-induced death and cardiovascular events [38, 39]. Hence, the pros and cons of intensive glycemic control for patients with diabetic nephropathy should be carefully balanced [42].
This study also demonstrated differences in mortality trends by country, geographic area, domestic income and time scale. These findings could be explained by pathophysiological differences among individuals of diverse ethnicity and cultural background, or by variations in diabetes control. Bullock et al. demonstrated that, between 2000 and 2013, the incidence of ESRD with diabetes in the United States declined by 28% for American Indian/Alaska Native people, followed by Hispanic (22%), non-Hispanic white (14%) and non-Hispanic black people (13%). Yet, during the same period, ESRD incidence remained relatively stable in Asian individuals with diabetes [25]. Additionally, income may also contribute to variable death rates. A study conducted in England showed that the risk of inpatient admission for diabetes increases with socioeconomic deprivation [40]. Reports from the International Diabetes Federation (IDF) have shown that the availability of diabetic drugs differs tremendously across income groups: the availability of diabetes supplies ranged from ~80% in high-income countries to <15% in low-income countries.
The increasing trend of deaths from microvascular complications may correlate with the rising trend of deaths from macrovascular complications [42, 43]. Vascular impairment is closely associated with diabetes. On the one hand, T2DM has a substantial effect on micro- and macrovascular disease and all-cause mortality rates in all age groups, with a more pronounced impact on younger individuals [44]. On the other hand, vascular dysfunction may also contribute to the development of T2DM [45].
In patients with T2DM, carotid atherosclerosis parameters predict both cardiovascular and renal outcomes, allowing improved renal risk stratification [46]. An elevated triglyceride-glucose (TyG) index, a surrogate of vascular impairment, has been shown to be positively associated with diabetes prognosis [47] and renal microvascular damage [48]. Co-existing microvascular and macrovascular complications are frequently observed in patients with diabetes. In this regard, there are considerable difficulties in differentiating microvascular from macrovascular complications when issuing a death certificate. The presence and severity of microvascular complications contribute independently to the increased risk of all-cause mortality and cardiovascular events [49]. Therefore, microangiopathy and macroangiopathy may no longer be considered entirely separate entities but rather a continuum of systemic vascular damage [50].
To our knowledge, this was the first study to evaluate the mortality trends of diabetic vascular complications on a global scale. However, there were several limitations. First, we used the WHO Mortality Database, in which data for some countries with large diabetic populations, such as China and India, which together account for the largest number of people with diabetes mellitus worldwide, are not available. Therefore, the mortality trends of those countries were not reflected in our estimates. Nevertheless, our methodology can be applied to other country-level data for a better understanding of the mortality trend of diabetic vascular complications. Second, the ICD codes used by different countries to identify the underlying cause of death may also generate potential bias. In this study, the highest overall age-standardized proportion of diabetic vascular complication-related deaths was found in Singapore (759.6 cases per 1000 deaths, 95% CI 736.8-782.5), while the lowest was in Sri Lanka (10.9 cases per 1000 deaths, 95% CI 8.7-13.1). This difference might result from deaths assigned the code for unspecified diabetes mellitus without complications (E14.9): the percentage of E14.9-coded deaths was 1.48% (16/1664) in Singapore but 97.64% (8336/8535) in Sri Lanka. Additionally, patients with diabetes, especially the elderly, often have several complications, including hypoglycemia, cardiovascular disease and stroke; deaths from these complications may be the result of vascular damage but recorded under other causes in different countries. Moreover, LEA is often used as a neurological end point, based on the observation that many amputations result from diabetic neuropathy and ulcers/infections in the insensate foot, but it is not recorded as a neuropathy end point under the ICD rules. Therefore, the comparability and representativeness of mortality data might be affected across countries, even though the data available from the WDB comprise deaths registered in national vital registration systems with the underlying cause of death coded by the relevant national authority. Third, since certain years of data were missing in some countries, we were not able to analyze trends by country, but instead estimated mortality trends with aggregated regional data. Last, estimates of proportions and rates were not based on diabetes prevalence over time, as such data were not available. Instead, we used total diabetes deaths, which is indicative of the burden of diabetes.
In conclusion, we have demonstrated an upward trend of diabetic vascular complication-related deaths during the period 2000-2016. This rising trend was mainly the result of renal complications in T2DM. Although the morbidity of diabetic vascular complications has declined in the past few decades, mortality has continued to climb. This finding indicates an urgent need to develop and implement effective strategies to reduce the burden of diabetic vascular complications, particularly renal damage.
Supplementary information accompanies this paper at https://doi.org/10.1186/s12933-020-01159-5.
Additional file 1. Appendix S1: Details of ICD codes for diabetes mellitus and for diabetic vascular complications used in this study. Appendix S2: Flow chart of country selection. Appendix S3: Available years (grey) for diabetes microvascular complication deaths, by country. Appendix S4: Available years (grey) for total diabetes deaths, by country. Appendix S5: Available years (grey) for mid-year population, by country. Appendix S6: Description of the joinpoint regression model. Appendix S7: Crude and age-standardized proportions and rates of diabetes microvascular complication-related deaths, by country. Appendix S8: Crude and age-standardized odds ratios of proportions and rates compared to overall, by country. Appendix S9: Crude and age-standardized proportions from 2000 to 2016, by sex. Appendix S10: Crude age-specific proportions from 2000
|
The swine industry plays an important role in feeding the world, as pork is one of the most consumed animal proteins worldwide [1, 2]. Emerging and re-emerging viral infectious diseases have been posing great challenges to the swine industry, among which porcine reproductive and respiratory syndrome (PRRS) is one of the most devastating [3, 4]. The PRRS virus (PRRSV), the causative agent of PRRS, contains a positive-sense, single-stranded, polyadenylated, 15 kb RNA genome [5]. PRRSV is categorized into two genotypes, type 1 (European) and type 2 (North American), which differ from each other by approximately 40% at the genomic level [6-8]; strains within each genotype also vary considerably, with genomic differences as high as 20% [9].
Globally, PRRS remains a threat to the swine industry despite many years of combined efforts to combat and control infection and disease [10]. One of the challenges for PRRSV control is the frequent recurrence of PRRS outbreaks in swine farms [11]; of farms reporting an outbreak today, 71% are predicted to have a recurrence of PRRSV infection within the following two years [12]. Recurrence is caused either by the introduction of a new strain or by the resident virus strain, and knowing which type of recurrence has occurred is crucial for determining the necessary control methods.
PRRSV RNA was extracted from cell culture supernatants, virus-negative pig serum spiked with PRRSV, and clinical PRRSV-positive serum samples using the QIAamp Viral RNA mini kit (Qiagen, Germantown, MD) following the manufacturer's instructions, without the addition of carrier RNA and with a final elution in 50 µL nuclease-free water. A high-concentration PRRSV stock (supernatants from virus grown in MARC-145 cells) was extracted to generate a large amount of high-concentration RNA for whole genome sequencing. Known concentrations of virus in serum (spike-in samples) were generated by adding the PRRSV stock to virus-negative pig serum, half of which was used for sequencing and the other half for determining the number of viral copies present. For clinical samples, RNA was extracted from 300 µL of serum, two thirds of which was used for sequencing and the remaining third to determine the number of viral copies present. Viral copies were quantified using a previously described RT-qPCR assay with a standard curve, from which the total number of copies used for sequencing was calculated [48].
Since MinION RNA sequencing requires a high amount of input RNA for library preparation (>500 ng), samples with lower viral RNA concentrations were supplemented with exogenous cellular RNA for sequencing library preparation. Although lower amounts of RNA can be used, adding exogenous mRNA protects the flow cells, ensures consistency between samples (especially those with low amounts of RNA), and tests the method for use with clinical samples, such as cells or tissues, which would contain cellular mRNAs. This exogenous cellular RNA was obtained by extracting total RNA from MARC-145 cells using the Qiagen RNeasy mini kit (Qiagen, Germantown, MD, USA) according to the manufacturer's protocol, with the addition of on-column DNase digestion. When needed, RNA was concentrated using a SpeedVac lab concentrator (Savant, NY, USA). A Qubit 3.0 fluorometer (Life Technologies, Carlsbad, CA, USA) and a NanoDrop 1000 spectrophotometer (Thermo Scientific, Waltham, MA, USA) were used for quantitative and qualitative assessments.
Sequencing libraries were generated from 600 ng of extracted viral RNA, or a combination of viral RNA and exogenous cellular RNA, using the direct RNA sequencing kit (Oxford Nanopore Technologies Ltd, Oxford, UK) according to the manufacturer's protocol [41]. Since the PRRSV genome contains a 3' poly(A) tail, the standard protocols and DRS adapter provided by Oxford Nanopore could be used. The sequencing library was then loaded onto an R9.4.1 SpotON flow cell and sequenced using a MinION Mk I sequencer (Oxford Nanopore Technologies Ltd, Oxford, UK), which was connected to a computer and remotely controlled by the MinKNOW software (Oxford Nanopore Technologies Ltd, Oxford, UK). The estimated yield was monitored in real time; samples were sequenced for approximately 6 hours, with run time adjusted as needed.
For evaluation of whole viral genome generation from MinION direct RNA sequencing, two duplicate runs were performed starting with 600 ng PRRSV VR2332 genomic RNA. Sequencing of mixed-strain samples combined 300 ng of VR2332 RNA and 300 ng of strain 1-7-4 or SDEU RNA, or 600 ng VR2332 RNA total as a control. Other samples that contained less than 600 ng of PRRSV RNA, such as clinical samples, were supplemented with exogenous cellular RNA to obtain a total of 600 ng RNA for use in library preparation.
Basecalling of raw reads was performed using Albacore (Oxford Nanopore Technologies Ltd, Oxford, UK) to generate FASTQ files. Total yield, total reads, read quality, and read length from whole genome sequencing were analyzed using NanoPlot [49]. To obtain raw error rates and error patterns, sequencing reads were mapped to the VR2332 reference sequence using minimap2 [50], processed with SAMtools [51] to generate BAM files, and then evaluated by AlignQC [52].
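A minimal sketch of this mapping and QC step is shown below, assuming minimap2, samtools, and NanoPlot are installed and on PATH; all file names are placeholders, and samtools stats stands in here as a quick error-rate readout for the AlignQC report the authors used:

import subprocess

def run(cmd, stdout=None):
    # Echo then execute; assumes each tool is installed and on PATH
    print("$ " + " ".join(cmd))
    subprocess.run(cmd, check=True, stdout=stdout)

run(["NanoPlot", "--fastq", "reads.fastq", "-o", "qc"])        # yield, length, quality
run(["minimap2", "-ax", "map-ont",                             # nanopore preset
     "VR2332.fasta", "reads.fastq", "-o", "mapped.sam"])
run(["samtools", "sort", "-o", "mapped.bam", "mapped.sam"])
run(["samtools", "index", "mapped.bam"])
with open("error_stats.txt", "w") as fh:                       # quick error-rate readout
    run(["samtools", "stats", "mapped.bam"], stdout=fh)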
A consensus genome was generated using the longest PRRSV read from the sequencing data as a scaffold. The longest PRRSV read was extracted from the FASTQ file using an awk command; all other raw reads were then mapped to this sequence using minimap2 [50], and the resulting alignment was processed using Racon [53]. A comparison of this consensus genome to the reference genome was performed by pairwise alignment using Geneious software (version 8.0.5) [54]. Depth of coverage across the consensus genome was analyzed using Qualimap [55]. The average coverage and accuracy across the genome were then evaluated using a window size of 1000 bp and visualized using GraphPad Prism 8 (GraphPad Software, San Diego, CA, USA).
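The scaffold-and-polish step might look as follows (a sketch assuming Biopython, minimap2, and Racon are available; file names are placeholders, and this is not the authors' exact awk-based script):

import subprocess
from Bio import SeqIO  # Biopython

# Use the longest read as the consensus scaffold ("prrsv_reads.fastq" is a placeholder)
reads = list(SeqIO.parse("prrsv_reads.fastq", "fastq"))
longest = max(reads, key=lambda r: len(r.seq))
SeqIO.write(longest, "scaffold.fasta", "fasta")

# Map all reads back to the scaffold, then polish with Racon
subprocess.run(["minimap2", "-ax", "map-ont", "scaffold.fasta",
                "prrsv_reads.fastq", "-o", "overlaps.sam"], check=True)
with open("consensus.fasta", "w") as out:
    # racon arguments: <reads> <overlaps> <target>; polished target goes to stdout
    subprocess.run(["racon", "prrsv_reads.fastq", "overlaps.sam",
                    "scaffold.fasta"], check=True, stdout=out)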
The analytical sensitivity of MinION direct RNA sequencing was analyzed by examining the sequencing yield needed for viral strain detection, as well as the number of viral copies needed to generate detectable viral sequence. The sequencing yield needed for viral strain detection was examined by generating datasets with targeted yields ranging from 3000 to 30,000,000 bases from the two whole genome sequencing runs. Specifically, the text summary file from basecalling was analyzed using R (version 3.4.0) [56], and groups with the desired yields were generated by setting a cutoff at the sequencing time at which the desired yield was reached. The number of viral copies needed in a sample in order to detect the virus was examined by sequencing viral RNA extracted from cell supernatant samples, spike-in samples, and clinical samples containing different amounts of virus. Because samples with a relatively low number of viral copies yielded low amounts of viral RNA, exogenous cellular RNA was added to achieve efficient library production. Following sequencing of the libraries containing both viral and cellular RNA, the PRRSV sequences needed to be extracted for further analysis. First, a custom PRRSV sequence database containing 951 PRRSV whole genome sequences was generated by downloading all PRRSV whole genome sequences available in GenBank (949 sequences, including our VR2332 strain; download date: Nov 2018), with the addition of sequences from our SDEU and 1-7-4 lab strains. The PRRSV reads were then identified by mapping the raw sequencing reads to this custom PRRSV database using minimap2 [50] and extracting the mapped reads using SAMtools [51].
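The yield-cutoff step can be illustrated as below; the column names follow typical ONT sequencing-summary files, which is an assumption for this sketch:

import pandas as pd

# Column names assume the ONT sequencing-summary format
summ = pd.read_csv("sequencing_summary.txt", sep="\t")
summ = summ.sort_values("start_time")
summ["cum_yield"] = summ["sequence_length_template"].cumsum()

target = 3_000_000  # e.g., a 3000 kb target yield
subset = summ[summ["cum_yield"] <= target]
print(f"{len(subset)} reads fall within the first ~{target:,} bases of yield")
# subset["read_id"] can then be used to pull those records from the FASTQ file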
Identification of the viral strain present in the sample was performed using the basic local alignment search tool (BLAST) with a significance filter of expect value E < 10⁻⁵⁰ applied to the PRRSV sequence reads. The PRRSV raw reads were compared to the custom PRRSV database using nucleotide BLAST (BLASTn), and the top match, based on bit score, was regarded as the strain detected in the sample. This detected sequence was then aligned to the known reference genome using Geneious software (version R8.0.5) [54], and the percent identity was recorded to show the accuracy of detection. For supernatant and spike-in samples, both the VR2332 whole genome and the ORF5 sequence were known and designated as the reference sequences to compare to the MinION-generated sequences. For clinical samples, only the ORF5 sequence was known and was used as the reference for comparison. A consensus genome was generated, if possible, for each dataset or sample using the longest PRRSV read as a scaffold, followed by analysis of consensus length and accuracy as described above.
Linear regression analysis was performed to compare PRRSV sequencing reads to viral RNA copies using GraphPad Prism 8 (GraphPad Software, La Jolla, CA, USA). In order to normalize among different sequencing runs with varying total reads, the ratio of PRRSV reads to total reads was used for comparison. The viral RNA copies were determined by RT-qPCR and reported as total viral copies per sequencing run.
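The normalization and regression can be sketched as follows; all numbers are placeholders, and the log-log transform is an assumption made for illustration, not necessarily the authors' choice:

import numpy as np
from scipy import stats

# Placeholder values, for illustration only
viral_copies = np.array([3.2e4, 2.1e6, 1.5e9, 5.9e9])  # RT-qPCR copies per run
prrsv_reads = np.array([6.0, 3.1e2, 1.4e3, 5.2e3])
total_reads = np.array([9.1e5, 8.7e5, 1.2e6, 1.0e6])

ratio = prrsv_reads / total_reads  # normalizes for flow-cell variability
fit = stats.linregress(np.log10(viral_copies), np.log10(ratio))
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.2f}")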
Samples containing a mixture of two viral isolates, or VR2332 alone as a control, were sequenced as above. In order to identify the yields needed for accurate strain detection and differentiation, datasets with yields from 30,000 to 30,000,000 bases were generated randomly from total reads using fastq-tools (https://homes.cs.washington.edu/~dcjones/fastq-tools/). PRRSV reads were extracted by mapping all reads to the PRRSV database using minimap2 [50]. To detect PRRSV strains, PRRSV reads were first analyzed by BLASTn to identify the top BLAST hit as determined by bit score (BLAST filter of E < 10⁻⁵⁰, plus alignment identity >80% and length >900 bp). Then, all PRRSV reads were mapped to this top BLAST hit using minimap2 with the "map-ont" preset option [50], and mapped reads were extracted using SAMtools [51]. The unmapped reads were also extracted and analyzed against the PRRSV database a second time to detect any other strain present in the same sample. The top BLAST hit was recorded, and the reads mapped and unmapped to this second top match were again separated. This was repeated until no PRRSV strain was detected in the extracted unmapped reads. The read length and identity filters were based on the results of the analytical sensitivity experiment, where the detection limit was approximately 900 bp and 80% identity. The top BLAST hits were compared to the targeted known strains (1-7-4, SDEU, or VR2332), and the percent identity was recorded. The percentages of reads matching the detected isolates relative to total PRRSV reads were also recorded.
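A sketch of this stepwise detection loop is given below, assuming blastn, minimap2, and samtools are on PATH, a BLAST database ("prrsv_db") and per-strain FASTA files exist, and reads have been converted to FASTA; all names are placeholders:

import subprocess

def top_blast_hit(fasta):
    # Return the subject ID of the best-scoring hit passing the filters, or None
    out = subprocess.run(
        ["blastn", "-query", fasta, "-db", "prrsv_db", "-evalue", "1e-50",
         "-outfmt", "6 qseqid sseqid pident length bitscore"],
        check=True, capture_output=True, text=True).stdout
    hits = [h.split("\t") for h in out.splitlines()]
    hits = [h for h in hits if float(h[2]) > 80 and int(h[3]) > 900]
    return max(hits, key=lambda h: float(h[4]))[1] if hits else None

reads, detected = "prrsv_reads.fasta", []
# (A production version would also guard against repeated top hits)
while (strain := top_blast_hit(reads)) is not None:
    detected.append(strain)
    subprocess.run(["minimap2", "-ax", "map-ont", f"{strain}.fasta",
                    reads, "-o", "aln.sam"], check=True)
    reads = f"unmapped_{len(detected)}.fasta"
    with open(reads, "w") as fh:  # keep only reads this strain does not explain
        subprocess.run(["samtools", "fasta", "-f", "4", "aln.sam"],
                       check=True, stdout=fh)

print("strains detected:", detected)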
The investigation of previous-run contamination was conducted by extracting all reads from the suspected sequencing results that mapped to the reference sequence of the contaminating strain. The "read_id" values of the contaminating reads were extracted using SAMtools. As an indication of when during the sequencing run each contaminating read was observed, the "start_time" matching each "read_id" was extracted using R (version 3.4.0) [56]. The number of total contaminating reads over the time course of the sequencing run was analyzed using GraphPad Prism 8 (GraphPad Software, La Jolla, CA, USA).
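This timing analysis can be sketched as follows (column names again assume the ONT sequencing-summary format; "contaminant_read_ids.txt" is a placeholder holding one read_id per line):

import pandas as pd

# Column names assume the ONT sequencing-summary format
summ = pd.read_csv("sequencing_summary.txt", sep="\t")
contam_ids = set(open("contaminant_read_ids.txt").read().split())
contam = summ[summ["read_id"].isin(contam_ids)].sort_values("start_time")

# Cumulative count over the run: steady accumulation implicates the whole
# run, rather than carryover confined to its first minutes
contam = contam.assign(cum_reads=range(1, len(contam) + 1))
print(contam[["start_time", "cum_reads"]].tail())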
The main bioinformatic methods and code used in this study can be found at: https://github.com/ShaoyuanTan/PRRSVproject.
The sequencing data has been deposited to NCBI Sequence Read Archive (SRA) under accession numbers: SRR10292736 to SRR10292741.
A high-concentration cell culture grown PRRSV VR2332 stock was used for RNA isolation and evaluation of MinION direct RNA whole genome sequencing. PRRSV RNA was extracted using the QIAamp Viral RNA mini kit, which has shown consistently good performance in several studies [57, 58]. A total of 600 ng RNA was used for library preparation and sequencing, which was performed in duplicate. Since the whole genome sequencing was performed under ideal conditions using 600 ng of RNA starting material, one hour of sequencing was sufficient to generate more than enough reads for sequence analysis (Table 1). Raw reads from the first hour of sequencing were extracted and evaluated for yield, read quality, read length, raw error rates, and consensus generation (Table 1). Both sequencing runs generated more than 20 megabases (mb) of total yield within one hour of sequencing, with the longest raw read over 15,000 bp in length, very close to the full-length VR2332 reference sequence (15,182 bp) (Table 1). Interestingly, the majority of the reads were fairly small, with only 11-12 reads over 10,000 bp and only 53-73 reads over 7500 bases for the two sequencing runs. Comparing the longest raw read to the VR2332 reference sequence gave an identity of approximately 86.5%, and the sequence accuracy improved to 95.4% after generating a consensus using the longest raw read as a scaffold (Table 1). Further examination of the error rates between the raw reads and the reference sequence identified a total error rate of 13.9%, comprising 6.3% deletions (45% of total error), 4.1% mismatches (30% of total error), and 3.5% insertions (25% of total error) (Figure 1a). Of note, insertions and deletions of U(T) nucleotides, and C/U(T) mismatches, were the most frequently observed error patterns (Figure 1b).
Figure 1. To obtain raw error rates and error patterns, raw reads were mapped to the VR2332 reference sequence, followed by evaluation of the mapping. (a) The percent of each error type is shown, as well as the total error rate. (b) The error patterns of insertions (first row, with darker pink indicating higher error), deletions (first column, with darker orange indicating higher error), and mismatches (center matrix, with darker red indicating higher error). The U bases in the query sequence were adjusted to T automatically by the minimap program in order to map to the DNA reference sequence.
The depth of coverage across the PRRSV genome was observed to be extremely uneven, with higher coverage at the 3' end of the genome gradually decreasing towards the 5' end, which agrees with previous observations (Figure 2) [44, 45]. This is not surprising, since the sequencing adapter was ligated to the poly(A) tail at the 3' end, which is where sequencing began. If the RNA was partially degraded, or RNA secondary structure hampered the movement of the RNA through the nanopores, then only the 3' end would be sequenced, resulting in an uneven coverage distribution. Despite the uneven coverage, the accuracy across the genome was similar, around 95%, with the middle region of the genome having a higher accuracy (97%) and the 3' end having the lowest accuracy (93%) (Figure 2). This was surprising, since higher coverage can correct random sequencing errors and usually results in higher accuracy, which would suggest the 3' end should have a much higher accuracy rather than a lower one. This conflict implies the existence of technological bias producing sequencing errors that cannot be corrected by depth of coverage. A similar observation of lower accuracy proximal to the 3' poly(A) tail has been made previously and attributed to the DNA adapter, which can partially explain the poor accuracy at the 3' end [41, 59].
Analytical sensitivity of MinION direct RNA sequencing was first evaluated by examining sequencing results over a range of sequence yields, to determine the lowest yield at which the PRRS virus could be identified and at which a consensus genome could be generated. A range of sequence yields from 3 kilobases (kb) to 30,000 kb was obtained from the two whole genome sequencing runs above. Total reads were analyzed against a custom PRRSV database using BLASTn, and the top match for each sequence yield, even those with only a few reads, was GenBank ID KC469618.1 (15,458 bp). A 99.9% identity was observed between the known sequence of the VR2332 strain used in this experiment (GenBank ID EF536003.1, 15,182 bp) and the top BLAST match, KC469618.1, with an alignment length of 15,183 bp and only 15 base changes, suggesting they are essentially the same isolate, especially since PRRSV has a high mutation rate estimated at (4.71-9.8) × 10⁻²/site/year [20].
The length and accuracy of the longest reads, and the generation of consensus sequences, were further examined at the different sequence yields (Table 2). As sequencing yield increased, the length of the longest reads obtained increased, as did the length of the consensus sequence, reaching a maximal level at a yield of 15,000 kb (Table 2). The accuracy of the longest read did not change across yields. However, the accuracy of the consensus sequence increased from about 92% to 95% between 15 kb and 7500 kb of input yield, due to the increased depth of coverage (Table 2). Consensus accuracy generated from yields of more than 7500 kb was consistently above 95% (Table 2). A nearly full-length (15,101 bp; breadth of coverage 99.5%) PRRSV consensus genome sequence with a sequence accuracy of 95.2% was generated from a sequence yield of 15 mb (Table 2). The minimal sequencing yield required for accurate PRRSV strain detection was found to be 3 kb (~5 reads) (Table 2). A total sequencing yield of 15 mb (~6 × 10⁴ reads) allowed for accurate whole PRRSV genome generation (Table 2).
The high amounts of viral RNA used for the evaluation of MinION sequencing yields above are unrealistic and do not represent the amounts of virus found in field samples. Thus, analytical sensitivity was next examined using samples with a more realistic number of viral copies. A total of 5 lower-concentration cell culture samples, 3 serum samples with known amounts of virus spiked in, and 6 clinical samples containing varying amounts of virus were sequenced. The total number of viral copies used for each MinION sequencing reaction was determined using RT-qPCR, ranging from 3.2 × 10⁴ to 5.9 × 10⁹ viral copies per sequencing reaction (Table 3). The PRRSV strain was determined by analyzing total raw reads against the custom PRRSV database, and the top BLAST match was used to identify the viral strain present in the sample (Table 3). MinION sequencing was able to detect PRRSV in spike-in samples containing as few as 3.4 × 10⁴ viral copies and in clinical samples at 3.8 × 10⁶ viral copies (Table 3). The analytical sensitivity difference related to sample type was unexpected but, in fact, reasonable. One possible reason for this difference could be viral RNA quality. Viral RNA extracted from cell culture supernatants is produced under clean laboratory conditions and stored promptly and properly to minimize viral and RNA degradation, giving higher quality samples. Clinical samples, on the other hand, are usually obtained on farm, and the subsequent handling, shipping, and storage will inevitably increase viral and RNA degradation and decrease sample quality, resulting in lower sequencing yields, whereas RT-qPCR, which is less sensitive to these conditions, can still detect the presence of the virus [60]. The detection accuracy of the raw PRRSV reads was determined by comparing the top BLAST hit to the known ORF5 sequence and/or whole genome sequence (Table 3). For cell supernatant and spike-in samples, the detection accuracy remained almost the same as the viral copy number increased from the order of 10⁴ to 10⁹, and the top hits all showed more than 99% identity to the reference whole genome sequence (Table 3). For clinical samples, at least 3.8 × 10⁶ viral copies were needed in order to detect viral sequence (Table 3). At 3.8 × 10⁶ viral copies the detection accuracy, comparing the top BLAST hit to the known ORF5 sequence, was 94%, increasing to 97% as the number of viral copies increased (Table 3).
PRRSV consensus sequences were obtained from each of the samples, where possible, in order to evaluate the ability of DRS to generate accurate consensus sequences from low viral copy samples (Table 3). MinION sequencing produced a large number of total raw reads, most of which were from the exogenous cellular RNA added for successful library preparation. The desired PRRSV reads were obtained by mapping raw reads against the custom PRRSV database, and those that matched were used to generate a consensus sequence. A consensus sequence could not be obtained for 2 of the samples (spike-in, 3.4 × 10⁴ viral copies; clinical, 3.8 × 10⁶ viral copies) because of the low number of PRRSV reads present, so the longest PRRSV read was used for accuracy analysis instead (Table 3). The accuracy of the consensus sequence (or longest PRRSV read) was determined by comparing it to the known whole genome and/or ORF5 sequence (Table 3). Not surprisingly, there was a general trend that longer and more accurate consensus sequences were generated when more viral copies were sequenced, with slight fluctuations due to variations in sequencer performance (Table 3). Notably, an essentially full-length genome with a consensus accuracy of 93.0% was obtained from the spike-in sample containing 1.5 × 10⁹ viral copies (Table 3). The other three samples with more than 10⁹ input viral copies also generated consensus genomes with accuracies higher than 93%, but these were not full length, perhaps due to the low number of PRRSV reads (and total reads), even though the percentage of PRRSV reads per total reads was higher in these samples. Thus, more than 10⁹ viral copies, yielding perhaps 1500 PRRSV reads, are recommended if the goal is to obtain a full-length genome sequence; if identification of the viral strain involved in infection is all that is needed, clinical serum samples need only contain 10⁶-10⁷ viral copies (Table 3).
A comparison between the number of viral copies and the number of viral reads from sequencing was performed to determine whether there was a quantitative relationship between input PRRSV RNA amounts and output PRRSV sequencing reads. Of note, the total raw reads varied greatly even though the same amount of total RNA was used for library preparation (Table 3), mainly due to variation in flow cell performance, such as the number of available pores. To normalize the comparison, the ratio of PRRSV reads to total reads was calculated and compared to the input viral copies, and a strong positive correlation (r² = 0.88) was observed. This preliminary result suggests that knowledge of the number of viral copies in a sample can predict the approximate number of raw reads that will be obtained after sequencing, allowing for better-planned sequencing runs; conversely, the number of reads obtained from sequencing can be used to estimate the number of viral copies present in a sample.
In swine farms, PRRS outbreaks can occur even in vaccinated herds; it is therefore necessary to be able to differentiate infectious field strains from vaccine strains to aid in outbreak investigation [13, 14, 16]. To address this issue, we explored the use of MinION DRS for detection of multiple PRRSV strains in the same sample using a stepwise BLAST approach. Samples were created that contained the VR2332 strain (parental strain of the type 2 PRRSV MLV vaccine) to represent vaccine, and either a type 1 PRRSV strain (SDEU, 61.4% similarity with VR2332) or another type 2 PRRSV strain (1-7-4, 82.4% similarity with VR2332). After sequencing, PRRSV reads were extracted from total reads and BLAST analyzed against the custom PRRSV database to identify the top-match strain, and all PRRSV reads that mapped to this strain were obtained. The unmapped reads were then BLAST analyzed a second time against the custom PRRSV database to identify the top match among the remaining sequences, to which they were then mapped. If unmapped sequences remained, this pipeline was repeated to identify more than 2 PRRSV strains present in the sample. Results showed that even at a total sequence yield of 30 kb (20-26 PRRSV reads), MinION sequencing was able to identify a PRRSV strain with >99.9% identity to the input VR2332 strain (Table 4). The control samples did not yield a second PRRSV strain at any sequence yield, which was promising, since VR2332 was the only virus present. In the mixed virus samples, the second viral strain was not detectable at a total yield of 30 kb; however, at 300 kb or higher yields (245 or more PRRSV reads), the second strain could be identified with an accuracy >99.8% (Table 4). Thus, if enough virus is present from both strains, they can be successfully detected in a single sample. Interestingly, in the VR2332 + 1-7-4 sample, SDEU sequences were also detected, which was not expected since that strain was not present in the sample. Others have previously observed between-run carryover contamination on the same MinION flow cell [61, 62], and our observation likewise indicates carryover contamination from our previous VR2332 + SDEU sequencing run. This reiterates the need for effective washing of flow cells, as well as good records of what was run on each flow cell previously, especially if flow cells are used for diagnostics. Further investigation showed that SDEU reads were generated consistently during the entire sequencing run; thus, the contaminating reads could not be eliminated by removing the first few minutes of sequencing, as they contaminated the entire run. Although this experiment was designed to differentiate field strains from a vaccine strain, it can be applied to the investigation of co-infection with multiple strains. Since the identification of the strains present is based on the top BLAST match, any strain with a genome identical or similar to one in the database could be identified. If no similar strains are present in the database, a higher-than-usual percentage of unmapped reads should indicate a problem with the BLAST match parameters. The strains examined here were present in equal amounts and had at most 82.4% identity to each other.
Strains present at different ratios, or with higher identity to each other, need further examination to determine whether both can still be distinguished; with adjustment of the minimap2 parameters used to map reads to the top BLAST hit, they should be detectable.
From this study, we also noticed that the percentage of PRRSV reads mapping to the first BLAST hit could be used as an indicator of the presence of other PRRSV strains (Table 4). The samples that contained only VR2332 had >98% of PRRSV reads mapping to VR2332, while in the mixed-strain samples less than 85% of the PRRSV reads mapped to the first BLAST match, VR2332 (Table 4).
PRRSV has been a severe threat to the swine industry worldwide ever since it was first described in the late 1980s [63]. Control of PRRSV is difficult but important for animal welfare and swine production, and the development and implementation of reliable, accurate, and rapid diagnostic methods play a key role. Several methods have been developed and applied to PRRSV diagnosis, as well described by Ko et al. [64]. Currently, PRRSV diagnostics mainly comprise anti-PRRSV antibody detection by serological testing and nucleic acid detection using PCR-based assays. Sequencing of PRRSV began in the mid-1990s to discriminate between strains; it mainly focused on open reading frame 5 (ORF5) or other short regions of interest but rarely encompassed the complete genome, due to technological and monetary limitations [65, 66]. PRRSV ORF5 shows extensive genetic diversity and has been used to provide insight into PRRSV epidemiology; however, it represents only 5% of the whole genome, leaving 95% of the genomic information unexamined when assessing genetic variation. Whole genome sequencing is greatly needed to provide a more complete picture of the virus [67, 68], and is gradually becoming more feasible with the rapid development and innovation of new sequencing technologies [69, 70]. Oxford Nanopore direct RNA sequencing (DRS) is revolutionary for sequencing RNA viral genomes, since it sequences the RNA directly, allowing for detection of methylation sites and reducing the bias inherent in reverse transcription and PCR amplification prior to sequencing, and it generates long reads, allowing for the elucidation of recombination events [71].
This study was planned and performed to assess the feasibility of Oxford Nanopore MinION DRS in clinical PRRSV diagnostics for identifying the viral strains involved in infection. The key questions addressed were whether sequencing can detect PRRSV strains to attribute an outbreak to the introduction of a new strain or to recirculation of a previous one, whether sequencing can generate whole-genome information to aid further understanding of PRRSV epidemiology, and whether sequencing can detect and differentiate multiple strains in a single sample to investigate outbreaks in vaccinated herds or co-infection with multiple field strains. Previously, PRRSV whole genomes have been generated using Sanger and Illumina sequencing technologies [10, 47, 72]. While both can generate a whole PRRSV genome with more than 99.9% accuracy, the raw reads produced are usually less than 1500 bp. As a result, generating a PRRSV whole genome by Sanger sequencing requires multiple primer sets and multiple individual sequencing reactions, which is labor- and time-consuming; Illumina requires computationally intensive genome assembly, which takes time and expertise to perform effectively. Oxford Nanopore MinION sequencing, on the other hand, can generate ultra-long raw reads that are in theory limited only by input fragment length [73]. This feature saves time and effort when generating a whole-genome sequence. In this study, we successfully generated PRRSV raw reads up to the length of the entire genome (15 kb) with approximately 86% identity to the known input genome sequence. A bioinformatics approach was developed that used the longest raw read as a scaffold to generate a consensus sequence, improving the accuracy to 96% identity to the input genome.
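As an illustration of the scaffold idea, one way to realize it with common long-read tools is sketched below. This is a plausible reconstruction under stated assumptions (Biopython, minimap2, and racon installed; reads in a hypothetical "prrsv_reads.fa"), not necessarily the exact pipeline used in the study.

```python
# Pick the longest raw read as a scaffold, then polish it with all reads.
import subprocess
from Bio import SeqIO  # Biopython

def write_longest_read(fasta_in, scaffold_out):
    records = list(SeqIO.parse(fasta_in, "fasta"))
    longest = max(records, key=lambda rec: len(rec.seq))
    SeqIO.write(longest, scaffold_out, "fasta")
    return len(longest.seq)

print("scaffold length:", write_longest_read("prrsv_reads.fa", "scaffold.fa"))

# Overlap all reads against the scaffold, then run one round of racon
# polishing; majority support across reads corrects many per-read errors,
# which is how a ~86% raw-read identity can rise toward a ~96% consensus.
subprocess.run("minimap2 -x map-ont scaffold.fa prrsv_reads.fa > overlaps.paf",
               shell=True, check=True)
subprocess.run("racon prrsv_reads.fa overlaps.paf scaffold.fa > consensus.fa",
               shell=True, check=True)
```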
Sequencing can be incorporated as a supportive tool alongside PCR to aid in strain-level PRRSV detection. Both Sanger and Illumina sequencing have been reported to accurately detect PRRSV strains present in a sample, but both require reverse transcription of RNA into cDNA followed by PCR amplification prior to sequencing [10, 72]. In contrast, MinION technology sequences RNA strands directly. This is beneficial because no reverse transcription or PCR is needed, eliminating the biases those steps introduce and saving time, which allows same-day disease investigation. Moreover, direct RNA sequencing allows detection of nucleotide analogs, which have been correlated with numerous diseases [74]. Most importantly, the MinION sequencer is cost-effective and easily accessible, without investment in expensive sequencing and bioinformatics infrastructure. Despite the low raw-read accuracy of direct RNA sequencing (~86%), the main concern with this technology, PRRSV strains were identified with 99.9% accuracy using as few as 5 raw reads (3 kb total yield). This accurate strain-level detection, even at low per-read accuracy, can guide the choice of effective control methods through precise detection of the strains circulating on a farm. Having established the potential of DRS for strain-level detection of pathogens in this study and others [75], we next investigated the analytical sensitivity of PRRSV detection to determine its usefulness for obtaining reliable sequencing results. Previous research examining the analytical sensitivity of next-generation sequencing has reported sensitivities similar to or lower than RT-qPCR [28, 76], and third-generation Oxford Nanopore DRS previously showed a sensitivity of 1.89 × 10⁷ viral copies in an influenza virus study [44]. Our results indicated that samples with a minimum of 10⁴ to 10⁶ viral copies, depending on the sample type, can be successfully sequenced to accurately identify strains after about 6 hours of sequencing. Although DRS is not as sensitive as PCR as a diagnostic tool for identifying viral presence [77, 78], it can be used to further investigate the strain causing an outbreak, either directly from high-viral-load samples or following amplification of virus in cell culture. Additionally, a very strong correlation was observed between the number of viral reads generated through sequencing and the starting number of viral copies, indicating that sequencing reads can be predicted from the viral copies in a sample and vice versa, as other studies have also confirmed [28]. Interestingly, the observation that sequencing sensitivity was higher for cell-culture virus spiked into serum than for clinical serum samples suggests that sample handling, or perhaps sample quality, is an important factor for sequencing sensitivity [79], emphasizing the importance of careful handling, transport, and storage of clinical samples to protect the viral RNA from degradation [80, 81]. This also suggests that on-site sequencing, as opposed to a centralized diagnostic system, may allow higher sensitivity of detection because samples can be processed immediately after collection.
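The read-count/viral-copy relationship lends itself to a simple log-log regression; the sketch below shows the idea with invented numbers (the actual values are in the study's figures), so only the method, not the data, should be taken from it.

```python
# Fit log10(reads) against log10(viral copies); a strong linear fit lets one
# quantity be predicted from the other. All data points here are hypothetical.
import numpy as np

viral_copies = np.array([1e4, 1e5, 1e6, 1e7, 1e8])   # input copies (invented)
prrsv_reads  = np.array([6, 48, 510, 4900, 52000])   # observed reads (invented)

slope, intercept = np.polyfit(np.log10(viral_copies), np.log10(prrsv_reads), 1)
r = np.corrcoef(np.log10(viral_copies), np.log10(prrsv_reads))[0, 1]
print(f"log10(reads) = {slope:.2f}*log10(copies) + {intercept:.2f}; r = {r:.3f}")

def copies_from_reads(n_reads):
    """Invert the fit to estimate viral copies from an observed read count."""
    return 10 ** ((np.log10(n_reads) - intercept) / slope)

print(f"{copies_from_reads(245):.2e} copies")   # e.g. at the 245-read threshold
```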
In addition to single-strain infection, clinical situations can be more complicated, sometimes involving infection with multiple strains simultaneously, such as co-infection with multiple field strains or co-existence of field strain(s) with a vaccine strain [15, 82]. This not only poses challenges for disease diagnosis but also increases the chance of PRRSV recombination, considered one of the most important mechanisms in PRRSV evolution [10, 83]. To address this issue, Oxford Nanopore DRS was evaluated to determine whether it could discriminate co-infection by two PRRSV strains from different genotypes (61.4% similarity) as well as from the same genotype (82.4% similarity) in a single sample. In fact, the strains were easily differentiated, and the same method could be used to identify more than two strains present in a single sample.
This study begins the process of developing rapid, high-resolution PRRSV diagnostics for clinical situations where genomic data are urgently needed, including potential infection, outbreak investigation, vaccine design guidance, and producer demand for more specific information. PRRSV RNA presents the same technical demands for extraction, processing, and sequencing as influenza virus, coronaviruses, picornaviruses, rotaviruses, and many foreign animal disease viruses for which rapid pathogen identification and discrimination can be critically important. Knowledge gained from PRRSV in this study is immediately translatable to rapid diagnostic detection and strain-specific identification of an entire class of important swine pathogens. Indeed, MinION sequencing technology may prove to be a useful and affordable diagnostic tool for swine veterinary medicine in general, since it can provide a complete readout of RNA viruses and of RNAs from the host or other pathogens present in a sample without the need for pre-existing knowledge of what might be present [84].
The current evaluation indicates that this sequencing technology can be used successfully alongside qPCR for pathogen diagnosis, whole-genome generation, and strain-level pathogen detection and differentiation. As DRS technology continues to develop and RNA isolation is optimized for use outside a research laboratory, these methods can be further refined using updated materials and protocols. The future goal is on-site infectious disease investigation with the Oxford Nanopore MinION portable sequencer, allowing quicker diagnosis and more rapid decision-making, an important consideration in an industry in which delays in moving animals of unknown health status can disrupt flow patterns and schedules, or cause disease outbreaks with great economic losses.
|
Enterovirus 71 (EV71), a member of the family Picornaviridae, is a non-enveloped single-stranded RNA virus. It is the major causative agent of repeated outbreaks of hand, foot and mouth disease (HFMD) [1, 2]. In severe cases, especially among infants and children, EV71 infection causes severe neurological complications such as aseptic meningitis, brain stem encephalitis, pulmonary edema, poliomyelitis-like paralysis and eventual death. In the last decade, continuous outbreaks of EV71 in the Asia-Pacific region have caused considerable mortality [3]. More than 7 million HFMD cases were reported in China between 2008 and 2012, of which 2457 were fatal [4, 5]. No effective antiviral drug is currently available for treating EV71 infection [6]; there is therefore an urgent need for an effective agent against EV71 infection.
Chinese medicinal herbs are a great reservoir of active compounds against microbial infections [7-14]. Plants generate a great variety of compounds to eliminate or limit microbial invasion. The family Guttiferae contains more than 450 species distributed mostly across Asia, southern Africa and Western Polynesia. Many bioactive compounds, such as prenylated xanthones, benzophenones, biflavonoids, and polycyclic polyprenylated acylphloroglucinols, have been isolated from different family members [15-23]. They exhibit various biological activities including antibacterial, antifungal, antioxidant, anti-inflammatory and anticancer effects [15-23], although the underlying mechanisms are poorly understood.
With the growing number of natural antiviral compounds identified, a promising option is to find new compounds from medicinal herbs to combat EV71 infection [24-26]. Guided by bioactivity-directed isolation, a new isoprenyl benzophenone derivative, oblongifolin M (OM), was isolated from the medicinal herb Garcinia oblongifolia [27]. In this study, we investigated its antiviral activity against EV71 infection and the underlying mechanisms through comparative proteomics studies.
The antiviral effect of OM was tested in rhabdomyosarcoma (RD) cells by CPE assays. As shown in Figure 1A, RD cells without viral infection were flat with spindle-like shapes and attached well to the surface of the culture dishes. When cells were infected with EV71 at a multiplicity of infection (MOI) of 1, many cells showed CPE: they became round, detached from the surface of the culture dishes and floated away by 12 hours post-infection (p.i.) (Figure 1B). When the cells were pretreated with 15 μM OM 4 hours prior to EV71 infection, most cells were healthy, with only a few exhibiting CPE (Figure 1C), an indication that the infected cells were significantly protected from CPE. Strikingly, almost all EV71-infected cells were healthy when pretreated with 30 μM OM (Figure 1D). Similar results were obtained in HEK 293 and HeLa cells (data not shown). The concentration reducing EV71-induced CPE by 50% (IC50) was determined using GraphPad Prism 5. As shown in Table 1, the IC50 of OM was 2.38 ± 0.79 μM. Cell viability was used to determine the toxicity of OM in RD cells by MTT (3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide) assays after treating RD cells with OM for 24 h. Compared with untreated cells, cell viability was not obviously affected by OM at a concentration of 50 μM. The CC50 (concentration of OM required to kill 50% of cells) in uninfected cells was 83.87 ± 1.12 μM. The selectivity index of OM in RD cells was 35.24 (Table 1), indicating OM's potential as an effective antiviral agent against EV71 infection.
(Figure 1 legend, continued) E. OM inhibited EV71 reproduction. Virions in the culture supernatant were harvested 12 hours p.i. and the viral titer was determined by TCID50 assays. F. OM inhibited EV71 replication. At 12 hours p.i., intracellular RNA was isolated, and viral genomic RNA (primers targeting the VP1 gene) and cellular GAPDH mRNA were quantitated by qRT-PCR. The viral genomic RNA level was normalized to the GAPDH mRNA copy number; the mean VP1/GAPDH ratio was set to 1 in the control. G. Viral VP1 protein was decreased by OM in a dose-dependent manner. Cells were harvested 12 hours p.i. at an MOI of 1, and cell lysates were used for Western blot assays.
To further determine OM's antiviral potency, we titrated the virus in the culture media. As shown in Figure 1E, the mean viral titer was 5.6 × 10⁶ pfu/ml in the control group. After treatment of the cells with 15 μM OM, the viral titer significantly decreased to 3.42 × 10⁶ ± 6.4 × 10⁵ pfu/ml. When the concentration of OM was increased to 30 μM, the viral titer dropped dramatically, by over 90%, to 2.5 × 10⁵ ± 1.1 × 10⁵ pfu/ml. The antiviral effects were further validated by measuring intracellular viral genomic RNA copies and viral protein levels. Twelve hours after infection, the copy number of viral genomic RNA decreased by 23%, 65%, 84% and 95% after cells were treated with OM at 15, 20, 25, and 30 μM, respectively (Figure 1F). Consistently, intracellular VP1 protein levels were significantly reduced by OM in a dose-dependent manner (Figure 1G).
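For readers reproducing the dose-response analysis, the calculation behind the IC50 can be sketched as a four-parameter logistic fit. The paper used GraphPad Prism 5 (and Excel's Forecast function in the Methods), so the SciPy version below is an equivalent substitute, and all data points in it are invented for illustration.

```python
# Four-parameter logistic (4PL) fit of % CPE versus OM concentration; the
# inflection point of the curve is the IC50. Data values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

conc_um = np.array([0.5, 1, 2, 5, 10, 15, 30])    # μM OM (hypothetical)
cpe_pct = np.array([95, 80, 52, 24, 12, 6, 2])    # % EV71-induced CPE

(bottom, top, ic50, hill), _ = curve_fit(four_pl, conc_um, cpe_pct,
                                         p0=[0.0, 100.0, 2.0, 1.0])
print(f"IC50 ~ {ic50:.2f} uM")                    # cf. reported 2.38 ± 0.79 μM
print(f"SI = CC50/IC50 ~ {83.87 / ic50:.1f}")     # selectivity index, as in Table 1
```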
To investigate the potential antiviral mechanisms of OM against EV71 infection, we employed comparative proteomics. Proteins from RD cells treated with DMSO (control) or 30 μM OM were extracted and resolved by 2-DE analysis 48 hours post-treatment. Figure 2 shows a representative pair of silver-stained 2-DE maps of the two samples from three pairs of gels (Figure 2A and 2B). After comparison, 52 spots showed significant differences (over 2-fold, p < 0.05). Individual protein spots were excised and digested for MALDI-TOF MS and MS/MS analysis. Typical images of five paired spots were cropped and enlarged as shown in Figure 2C. We successfully identified 18 proteins with significant differential expression; eight were markedly up-regulated and ten down-regulated in OM-treated RD cells compared with the control group. The characteristics of the proteins (name, abbreviation, NCBI accession number, theoretical molecular mass, pI, peptide count, protein score (confidence interval in percent), number of unmatched masses, sequence coverage, fold change, and function) are listed in Table 2. Functionally, they fall into six categories: structural molecular activity (17%), metabolism (39%), cytoskeleton and transport proteins (11%), gene transcription-associated proteins (6%), heat shock proteins and chaperones (17%), and cell proliferation, metastasis, and signal transduction proteins (11%).
(Table 2 notes) a Spot numbers are shown in Figure 2. b Spot intensities were quantified using PDQuest software (Bio-Rad); the average fold change of spot intensity for each protein was calculated from three independent experiments (DMSO-treated versus OM-treated RD cells). ↑, increase; ↓, decrease.
We carried out Western blot assays to confirm the differentially expressed protein levels after treating RD cells with OM for 48 hours. Among five selected candidate proteins, four displayed expression patterns similar to those observed in the 2-DE assays. The protein levels of ERp57, RCN-1, RCN-3, and CALU significantly decreased after OM treatment (Figure 3A).
ERp57 drew our particular attention because it is crucial for the entry of SV40 and murine rotavirus [28, 29]. To further examine the correlation between ERp57 expression and EV71 viral protein levels, RD cells were treated with or without 30 μM OM for 4 hours and then infected with EV71 (MOI 1). As shown in Figure 3B, OM not only suppressed ERp57 expression but also significantly decreased viral VP1 and VP2 levels. These results indicated that ERp57 might be involved in the life cycle of EV71.
We tested the effects of ERp57 knockdown on EV71 infection. As shown in Figure 4A, both si-ERp57-1 and si-ERp57-2 effectively reduced the mRNA level of ERp57. As si-ERp57-1 was more effective at knocking down ERp57, it was chosen for further experiments. Consistent with the reduction in mRNA level, the ERp57 protein level was also significantly decreased by si-ERp57-1 (Figure 4B). We then examined whether knockdown of ERp57 would affect virus entry. RD cells were first transfected with either scrambled siRNA or si-ERp57-1 for 48 hours, and then infected with EV71 at an MOI of 20 for 1 hour on ice. The infected cells were washed three times with PBS to remove free virus that had not entered the cells, and intracellular viral RNA levels were measured by qRT-PCR. Our data showed that entry of EV71 was not affected by ERp57 knockdown (Figure 4C). To further investigate whether ERp57 knockdown would affect EV71 replication or reproduction, we first measured intracellular viral RNA levels at different time points. ERp57 knockdown significantly decreased intracellular viral RNA levels at 3 and 5 hours p.i. (Figure 4D), and markedly reduced the secreted virions in culture media collected at 5 and 7 hours p.i. (Figure 4E). These results indicate that ERp57 is involved in the early phase of the viral life cycle.
Our previous study showed that active replication of EV71 occurs just after uncoating (3 to 6 hours p.i.) [30]. After uncoating, the first and key event is viral protein translation driven by the IRES [31]. We therefore hypothesized that OM may inhibit EV71 infection by down-regulating IRES activity through ERp57. To test this hypothesis, we generated two constructs expressing the reporters Renilla luciferase (RLuc) and firefly luciferase (FLuc). As shown in Figure 5A, two consecutive stop codons (TGA TGA) were inserted between the RLuc and FLuc genes in the pRF plasmid, while the IRES sequence of EV71 was inserted between the RLuc and FLuc genes in the pIRES plasmid. In both constructs, the CMV promoter drives RLuc expression. FLuc translation is stopped in mRNA transcribed from pRF, whereas translation proceeds from the pIRES transcript. Hence, IRES activity can be measured and expressed as the ratio of FLuc to RLuc activity.
(Figure 3 legend) A. [...] were harvested and applied for Western blot assays. As shown, ERp57, RCN-1, RCN-2 and calumenin were markedly down-regulated by OM treatment. B. Cells were pretreated with 30 μM OM for 4 hours, and then infected with EV71 at an MOI of 1 for 12 hours. As shown, OM suppressed the expression of ERp57 and of the viral proteins VP1 and VP2. The relative density of each band to GAPDH was set to 1 in the control group.
We found that OM did not affect the FLuc/RLuc ratio when cells were transfected with the pRF plasmid; however, the FLuc/RLuc ratio decreased when cells were transfected with pIRES and treated with OM, and the inhibition of IRES activity was dose-dependent (Figure 5B). As expected, when ERp57 was knocked down, the IRES activity of EV71 markedly decreased (Figure 5C), correlating with the inhibitory effects of OM.
We constructed an expression plasmid to express human ERp57 in HEK 293 cells. As shown in Figure 6A, ectopic expression of ERp57 significantly stimulated the IRES activity of EV71. Furthermore, we examined whether ectopic expression of ERp57 would restore viral replication that had been inhibited by OM. ERp57 was ectopically expressed in RD cells, which were pretreated with 30 μM OM for 4 hours and then infected with EV71 at an MOI of 1. The viral RNA level decreased to 25% of the control at 5 hours p.i., whereas it was restored to 53% of the control upon overexpression of ERp57. These results demonstrate that ERp57 partly restored the IRES activity suppressed by OM.
(Figure 5 legend, continued) B. OM inhibited IRES activity in a dose-dependent manner. Cells were transfected with pIRES or pRF for 24 hours and then treated with or without OM for 12 hours, followed by luciferase assays. C. Knockdown of ERp57 reduced IRES activity. Cells were transfected with scrambled siRNA or si-ERp57-1; twenty-four hours post-transfection, the cells were transfected with pIRES or pRF for another 24 hours and used for luciferase assays. The FLuc/RLuc ratio was set to 1 when cells were transfected with the pRF reporter plasmid in the control group (without OM treatment or transfected with scrambled siRNA). *, p < 0.05; ***, p < 0.001.
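The arithmetic behind the reporter readout is simple enough to state explicitly. The sketch below, with invented luminescence values, shows how the FLuc/RLuc ratios reduce to a relative IRES activity normalized to the control.

```python
# Relative IRES activity from the bicistronic reporter: FLuc (IRES-driven) is
# normalized to RLuc (cap-driven) within each well, then to the control ratio,
# which is defined as 1. All luminescence values here are hypothetical.
def relative_ires_activity(fluc, rluc, fluc_ctrl, rluc_ctrl):
    return (fluc / rluc) / (fluc_ctrl / rluc_ctrl)

# e.g. pIRES-transfected cells with 30 μM OM versus the untreated pIRES control:
print(relative_ires_activity(fluc=1800, rluc=900,
                             fluc_ctrl=5200, rluc_ctrl=880))  # < 1 => inhibition
```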
There is no effective drug to treat EV71 infection [31-33]. In this paper, we showed that OM, an effective compound isolated from Garcinia oblongifolia, displays potent anti-EV71 activity through downregulation of ERp57, an important host factor in the early phase of the EV71 life cycle.
Host proteins are ideal targets for antiviral drug development because targeting them may avoid or delay the emergence of drug resistance driven by rapid viral mutation. ERp57, a chaperone protein that catalyzes disulfide bond exchange for correct protein folding in the ER, has been shown to play a crucial role in the entry of the DNA virus SV40 [28]. Beyond virus entry, ERp57 may exhibit other functions in the life cycles of other viruses. As an RNA virus, EV71 has a life cycle completely different from that of SV40, so an important question is whether ERp57 plays a similar role in the EV71 life cycle. To address this question, we first investigated whether EV71 entry would be affected by knockdown of ERp57. To our surprise, depletion of ERp57 did not affect virus entry into host cells (Figure 4C). Interestingly, we found that virus replication was significantly suppressed at 3 to 5 hours p.i. after ERp57 knockdown in host RD cells (Figure 4D). Accordingly, secreted virions clearly decreased at 5 to 7 hours p.i. (Figure 4E). In our previous study, we showed that EV71 undergoes a fast replication phase from 3 to 6 hours p.i. in RD cells, followed by rapid packaging from 6 to 9 hours p.i. (packaging phase) and the initiation of secretion [30]. Our results suggest that ERp57 is involved mainly in the early phase of the viral life cycle.
After uncoating in the host cell, the first event of the viral life cycle is IRES-mediated translation, which generates the proteins required for viral RNA replication. We hypothesized that ERp57 may affect viral translation by regulating viral IRES activity. Our results showed that knockdown of ERp57 significantly reduced viral IRES activity (Figure 5B). Intriguingly, OM also suppressed IRES activity in a dose-dependent manner (Figure 5C), demonstrating a correlation between the drug's effects and ERp57 function. To further validate our findings, we conducted gain-of-function studies in which ERp57 was ectopically expressed in host cells to test its effect on IRES activity. As expected, ectopic ERp57 expression increased IRES activity (Figure 6A). This suggests that OM may affect ERp57 function directly or indirectly. We noted that ectopic expression of ERp57 only partly rescued the inhibition of early EV71 replication (Figure 6B), indicating that, in addition to ERp57, other host factors must contribute to the inhibitory effects of OM on EV71 reproduction. As shown in Table 2, ten proteins were down-regulated and eight up-regulated by OM; whether these proteins contribute to the antiviral effect of OM should be investigated in future studies. Although several other compounds isolated from Garcinia oblongifolia (e.g., oblongifolin L or P) share a similar structure with OM, their antiviral effects against EV71 were much weaker (data not shown). Further investigation of the underlying mechanism may help us develop OM derivatives with more potent anti-EV71 activity.
In summary, we showed that OM, an active compound isolated from a traditional Chinese herb, potently inhibited EV71 infection. Through comparative proteomics, we identified ERp57 as an effector of OM. Knockdown of ERp57 inhibited viral replication by downregulating viral IRES activity, whereas ectopic expression of ERp57 increased IRES activity and partly rescued the inhibition of viral replication by OM. Our results demonstrate that OM inhibits the early replication of EV71 partly through downregulating ERp57. OM could potentially be developed further into a therapeutic drug for treating EV71 infection, and ERp57 may serve as a target for developing host-based antiviral drugs against EV71.
Chemicals were purchased from Sigma (St. Louis, MO, USA). Oblongifolin M (OM) was previously identified and isolated with a purity over 98% [27]. Commercially available antibodies were purchased from different companies; the antibody against VP1 was from Abnova.
Rhabdomyosarcoma cells (RD, ATCC accession no. CCL-136) were maintained in Dulbecco's modified Eagle's medium (DMEM) containing 10% (v/v) fetal bovine serum (FBS, HyClone) with 100 U/ml penicillin and 100 μg/ml streptomycin, in a humidified atmosphere of 5% CO2 at 37 °C. EV71 (SHZH98 strain; GenBank accession number AF302996.1) [30, 44, 45] was propagated on 90% confluent monolayer cells in DMEM with 2% FBS. When about 80% of the cells exhibited CPE, the culture fluid was collected, centrifuged, filtered and stored at −80 °C until use. The viral titer was determined by the end-point dilution assay of the median tissue culture infective dose (TCID50).
CPE assay. Antiviral activity of OM was evaluated by CPE assays. Approximately 20,000 RD cells per well were seeded in a 96-well plate for 24 hours at 37 °C to reach a monolayer, then treated with OM at different concentrations and infected with EV71 at an MOI of 0.01 or 1 (Supplementary Table S1). CPE was monitored periodically under a phase-contrast microscope and recorded with a CCD camera. The concentration of the tested compound required to reduce EV71-induced CPE by 50% (IC50) was determined using the Forecast function of Microsoft Excel for all experiments. Data are shown as mean values with standard deviations from three independent assays. The selectivity index (SI) is calculated as the ratio of CC50 to IC50.
Cytotoxicity of OM was evaluated by cell viability assays. RD cells (about 20,000 cells) were seeded in each well of a 96-well plate overnight to reach a monolayer and exposed to serial dilutions of OM for 24 hours. Cell viability was measured by the MTT method as described previously [46, 47], and the CC50 was calculated.
Cells were lysed in radioimmunoprecipitation assay (RIPA) buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% SDS, 1× Roche protease inhibitor cocktail) with occasional vortexing. The cell lysates were centrifuged at 14,000 rpm for 20 min at 4 °C to remove debris. The protein concentration of the lysates was determined by Bradford assay (Bio-Rad). Equal amounts of total protein for each sample were loaded and separated by 8% to 12% SDS-PAGE and then transferred onto polyvinylidene difluoride (PVDF) membranes (Amersham Biosciences). Membranes were blocked with 5% skim milk in TBST (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 0.1% Tween 20) for 1 h and incubated with specific antibodies. GAPDH served as the loading control. Target proteins were detected with the corresponding secondary antibodies (Santa Cruz Biotechnology) and visualized with a chemiluminescence detection system (Amersham Biosciences). Each immunoblot assay was carried out at least three times, as previously reported [44].
Total cellular RNA was extracted using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. Virion RNA in the culture media was isolated using a viral RNA isolation kit (Qiagen). RNA was then reverse transcribed into cDNA using a reverse transcription system (Promega). Quantitative reverse transcription-PCR (qRT-PCR) was carried out on an ABI 7500 Real-Time PCR system with SYBR Premix Ex Taq (Takara). The PCR was run under the following thermal cycling conditions: 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 34 s. The threshold cycle (CT) value was normalized to that of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) [47, 48]. The qRT-PCR used the following primer pairs: GAPDH, 5′-GATTCCACCCATGGCAAATTCCA-3′ (forward) and 5′-TGGTGATGGGATTTCCATTGATGA-3′ (reverse); EV71, 5′-GCAGCCCAAAAGAACTTCAC-3′ and 5′-ATTTCAGCAGCTTGGAGTGC-3′; ERp57, 5′-GTGCTAGAACTCACGGACGA-3′ and 5′-GCTGCAGCTTCATACTCAGG-3′. All samples were run in triplicate, and the experiment was repeated three times. The relative mRNA level of each target gene is expressed as fold change relative to the value of the corresponding control.
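The normalization described above (target CT against GAPDH, expressed as fold change versus control) is the standard 2^-ΔΔCt calculation; a minimal sketch with invented CT values follows.

```python
# 2^-ΔΔCt relative quantification: normalize the target to GAPDH within each
# sample, then compare treated to control. All CT values below are invented.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_ct = ct_target - ct_ref                  # ΔCt, treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # ΔCt, control sample
    return 2.0 ** -(delta_ct - delta_ct_ctrl)      # 2^-ΔΔCt

# e.g. VP1 after 30 μM OM versus the DMSO control:
print(fold_change(ct_target=27.4, ct_ref=18.1,
                  ct_target_ctrl=23.0, ct_ref_ctrl=18.0))  # ≈ 0.05 (95% drop)
```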
RD cells were treated with 30 μM OM (in DMSO) or an equal amount of DMSO for 48 hours. The cells were washed six times with wash buffer (10 mM Tris, 250 mM sucrose, pH 7.0) to thoroughly remove salt ions, lysed with lysis buffer (8 M urea, 2 M thiourea, 2% CHAPS, 1% Nonidet P-40, 2 mM tributylphosphine, 1× Roche Applied Science protease inhibitor mixture, 1× nuclease mixture, 1 mM PMSF, 2% IPG buffer), and left on ice for 45 min with occasional vortexing. The lysates were centrifuged (14,000 g for 15 min) at 4 °C, and the supernatants were collected and stored at −80 °C until use. Protein concentrations were measured by Bradford assay (Bio-Rad). Isoelectric focusing was conducted with 13-cm precast IPG strips (pH 4-7, linear; GE Healthcare) on an Ettan IPGphor II IEF System (Amersham Biosciences). IPG strips containing 150 μg of protein sample were rehydrated for 10 h at 30 V with 250 μl rehydration buffer (8 M urea, 2% CHAPS, 0.4% DTT, 0.5% IPG buffer, 0.002% bromophenol blue). We then focused the rehydrated strips using a stepwise voltage increment program: 500 and 1000 V for 1 h each, and 8000 V thereafter until 64 kV-h. After IEF, we incubated the focused strips in an equilibration buffer (6 M urea, 1% DTT, 2% SDS, 30% glycerol, 0.002% bromophenol blue, 50 mM Tris-HCl, pH 6.8) for 15 min with gentle agitation, then incubated the strips for another 15 min in the same buffer containing 2.5% iodoacetamide without DTT. The equilibrated strips were loaded onto 12.5% SDS-polyacrylamide gels and run at 15 mA/gel for 30 min and then at 30 mA/gel until the dye fronts reached the bottoms of the gels. Gels were stained by modified silver staining compatible with MS analysis and scanned with a calibrated GS-800 scanner (Bio-Rad), and spot intensities were calculated and compared using Quantity One and PDQuest 2-D analysis software (version 8.0; Bio-Rad) as described previously [35]. A 2-fold increase/decrease in spot intensity (DMSO-treated versus OM-treated RD cells) was set as the threshold for significant change.
Selected spots were manually excised from the gels, destained, washed, dried completely by vacuum, and digested with 10 μg/ml trypsin (Promega) in 25 mM ammonium bicarbonate, pH 8.0, for 16-18 h at 37 °C. The supernatants containing tryptic peptides were collected. We mixed 2 μl of peptide solution with 0.6 μl of matrix (6 mg/ml α-cyano-4-hydroxycinnamic acid in 45% ACN and 0.2% TFA) before spotting onto the MALDI plate. The samples were subjected to mass spectrometric analysis on a MALDI-TOF/TOF tandem mass spectrometer, the Ultraflex III proteomics analyzer (Bruker Daltonics). Mass spectra were obtained from 2000 laser shots at an accelerating voltage of 20 kV, over a mass range of 700-4000 m/z in positive-ion reflectron mode, with mass errors of less than 100 ppm, using FlexControl (version 3.3.108.0, Bruker Daltonics). MS/MS spectra were obtained by collecting 3000 laser shots with the default calibration. Detected MS/MS peaks required a minimum S/N ratio ≥3 and a cluster area S/N threshold ≥15 with smoothing, as described previously [49]. The combined MS and MS/MS data were used for protein identification against the SwissProt database (545,388 protein sequences; released June 23, 2014) with the Mascot search engine (version 2.2.04; Matrix Science) and BioTools software (version 3.5; Bruker Daltonics). The search parameters were: taxonomy of Homo sapiens, trypsin digestion with a maximum of one missed cleavage, fixed modification of cysteine carbamidomethylation, variable modification of methionine oxidation, monoisotopic peptide mass (MH+), unrestricted mass range, pI of 0-14, precursor tolerance of 100 ppm, and MS/MS fragment tolerance of 0.5 Da. Before database searching, known contaminant ions corresponding to human keratin and trypsin autolysis peptides were removed from the spectra. The top 20 hits for each search were reported; a protein candidate was reported only when it had the maximum number of matched peptides and a pI value nearest the observed value. Each identified isoform of a protein family was considered a distinct protein for analysis, as described previously [49].
Specific siRNA for ERp57 (si-ERp57-1) was purchased from QIAGEN (Cat. No. SI02654771). An in-house designed siRNA (si-ERp57-2) and a nonspecific siRNA (scrambled siRNA) were synthesized by GenePharma Co. (Shanghai). The scrambled siRNA (5′ UUC UCC GAA CGU GUC ACG UTT ACG U 3′), which has no homology to EV71 or the human genome, was used as the negative control (NC) in this study. RD cells were transfected with siRNAs using HiPerFect transfection reagent (Qiagen) according to the manufacturer's instructions. Knockdown efficiency was measured by qRT-PCR and Western blot assays.
RD cells were preincubated with OM for 4 hours or transfected with siRNAs for 48 hours; the cells were then washed twice with phosphate-buffered saline (PBS) and infected with EV71 at an MOI of 1. Time zero was set after 1 h of adsorption, when the culture media were removed and the cells were washed twice with PBS to remove unattached virus before adding 0.2 ml of DMEM containing 2% FBS to each well. To quantify intracellular viral RNA or extracellular viral RNA in virions, total RNA was isolated from infected cells or culture media at different time points and subjected to qRT-PCR assays [30].
HEK 293 cells were seeded in 12-well dishes overnight and then transfected with 100 ng of pRF or pIRES plasmid. Twenty-four hours post-transfection, cells were treated with or without OM at a final concentration of 15 or 30 μM for 12 hours. The cells were then harvested, and cell lysates were used for luciferase assays with a Dual Luciferase Assay System (Promega, WI) in accordance with the manufacturer's instructions.
Results are expressed as mean ± standard deviation (SD). All statistical analyses were carried out with SPSS, version 14.0 (SPSS Inc.). A two-tailed Student's t test was applied for two-group comparisons. A p value <0.05 was considered statistically significant.
|
Angiotensin-converting enzyme 2 (ACE2), a zinc-metallopeptidase, is a recently identified member of the renin-angiotensin system that degrades angiotensin (Ang) II to Ang-(1-7) [1, 2] . ACE2 is also a receptor for the coronavirus that causes severe acute respiratory syndrome (SARS) [3] . Although ACE2 is found in many tissues, it is highly expressed in the kidney, particularly within cells of the proximal tubule [4, 5] . Because it decreases levels of the vasoconstrictor Ang II and generates the vasodilator Ang-(1-7), ACE2 may protect against hypertension [6] and renal disease [7, 8] . ACE2 has also been shown to prevent acute lung injury [9] .
ACE2 is shed at its carboxy-terminus from the plasma membrane via the "a disintegrin and metalloproteinase-17" (ADAM-17) pathway [10, 11]. Recently, soluble ACE2 has been detected in certain biological fluids, including urine, plasma, and cell culture medium, by enzyme-linked immunosorbent assay (ELISA), enzyme activity assay, or Western analysis [10, 12, 13]. In this chapter, we describe a detailed method for measurement of ACE2 enzyme activity in biological fluids using a commercially available synthetic fluorogenic substrate for ACE2. We provide a practical, cost-effective, and high-throughput method to study the potential role of ACE2 shedding in renal and cardiovascular disease.
The following reagents must be prepared and brought to room temperature before starting the assay.
In the following section, we describe the ACE2 activity assay. First, we present the generation of the standard curve (Subheading 3.1), followed by measurement of ACE2 activity in biological fluids (Subheading 3.2).
Calculate the amounts of working reagents to use. Add the fluorogenic ACE2 substrate and protease inhibitors to the assay buffer (see Subheading 2, item 1) immediately before each experiment to generate the ACE2 substrate/assay buffer solution, achieving the following concentrations: 15 μM ACE2 substrate, 1 mM NEM, and 1 mM PMSF (Table 1). Seventy-five microliters of the ACE2 substrate/assay buffer solution will be used in each reaction (see Subheading 3.1.3 below). The protease inhibitors are added to prevent substrate hydrolysis in biological solutions [14]. For generation of the standard curve, we also routinely add protease inhibitors to the assay buffer.
Dilute the 10 μg/ml stock of human or mouse rACE2 1:30 in assay buffer to a concentration of 333.33 ng/ml for standard #1 (Table 2). Perform 1:2 serial dilutions of the preceding ACE2 standard in assay buffer for standards #2-6, with the lowest concentration at 10.42 ng/ml. Standard #7 is a blank containing no rACE2 (add 15 μl assay buffer alone instead). Add 15 μl/well of the serially diluted ACE2 standards into a total volume of 100 μl of ACE2 enzymatic reaction solution (see Table 2) to generate the standard curves (Fig. 1), with a range of ACE2 detection from 1.56 to 50 ng/ml. The linear equations generated for human and mouse rACE2 are used to convert RFU to ACE2 concentrations in human or mouse biological fluids, respectively. In our hands, the RFU signal reaches saturation when the rACE2 concentration is greater than 50 ng/ml, and becomes very weak or undetectable when the rACE2 concentration is less than 1.56 ng/ml. This range approximates the detection limits for human ACE2 using a commercial ELISA, which we previously reported to range from 0.39 to 25 ng/ml [13].
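As a computational companion to the standard curve, the linear fit can be expressed in a few lines. The RFU values below are invented placeholders; only the final in-well concentrations (15 μl of standard in a 100 μl reaction, i.e., 50 down to 1.56 ng/ml, plus the blank) are taken from the protocol.

```python
# Linear standard curve: RFU = slope * [rACE2 ng/ml] + intercept, fitted over
# the stated linear range. RFU readings here are invented placeholders.
import numpy as np

std_ng_ml = np.array([50.0, 25.0, 12.5, 6.25, 3.13, 1.56, 0.0])  # standards #1-7
rfu       = np.array([41000, 20600, 10400, 5300, 2700, 1400, 150])

slope, intercept = np.polyfit(std_ng_ml, rfu, 1)
print(f"RFU = {slope:.1f} * C + {intercept:.1f}")
# Keep slope/intercept to convert sample RFUs to ng/ml (see the sketch after
# the numbered steps below).
```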
Using this assay, we have measured ACE2 activity in human urine samples collected from renal transplant recipients [13] . We have also measured ACE2 activity in urine samples from male FVB/N mice, with and without streptozotocin (STZ)-induced diabetes, and in conditioned culture medium collected from primary cultures of mouse proximal tubular (PT) cells, after up to 72 h incubation in medium. Mouse proximal tubule (PT) cells derived from C57BL6 mice were grown in a defined medium of DMEM-F12 (1:1), supplemented with insulin (5 μg/ml), transferrin (5 μg/ml), selenium (5 ng/ml), hydrocortisone (50 nM), and 3,3′,5-triiodo-L-thyronine (2.5 nM), without fetal bovine serum (FBS). Serum has previously been demonstrated to contain ACE2 enzymatic activity, and should be avoided in this assay [15] . All biological samples are collected and placed on ice, aliquoted and then centrifuged at 12,000 × g for 5 min at 4 °C. Supernatants are collected, and stored at −80 °C until the assay is performed.
We have also measured ACE2 activity in plasma from mice with or without STZ-induced diabetes [8]. Blood is collected in chilled Eppendorf tubes, and plasma is separated by centrifugation at 3000 × g for 10 min at 4 °C and stored at −80 °C until the time of assay.
1. We dilute urine samples at least 1:2 and plasma samples 1:7.5 with assay buffer. For conditioned cell culture medium, undiluted samples are used. For each well of the 96-well plate, use 15 μl of diluted urine (e.g., 7.5 μl urine plus 7.5 μl assay buffer), 15 μl of diluted plasma (2 μl plasma plus 13 μl assay buffer), or 15 μl undiluted culture medium in a final volume of 100 μl of enzymatic reaction. If ACE2 activity falls outside the detection limits of the assay, a lower or higher dilution may be required.
3. Calculate the ACE2 concentrations from the RFU in biological fluids using the linear equations generated for human or mouse rACE2, as appropriate (Fig. 1). The ACE2 concentrations must be corrected for the dilution factor to obtain the actual concentrations in the undiluted samples. ACE2 concentration in biological fluids is typically expressed as ng/ml.
4. The amount of urinary ACE2 can be corrected for the creatinine concentration in the urine samples, as a control for urinary dilution, and reported as ng/μg creatinine. ACE2 activity in culture medium can be corrected for the amount of cell protein on the culture dishes and reported as ng/μg protein.
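Continuing the sketch begun with the standard curve, steps 3-4 amount to inverting the linear fit and applying the dilution and creatinine corrections. The helper below is hypothetical and all of its numeric inputs are placeholders.

```python
# Convert a sample RFU to ng/ml via the standard curve, then correct for the
# sample dilution; urinary values can additionally be normalized to creatinine.
def ace2_ng_per_ml(sample_rfu, slope, intercept, dilution_factor):
    return (sample_rfu - intercept) / slope * dilution_factor

conc = ace2_ng_per_ml(sample_rfu=6200, slope=820.0, intercept=150.0,
                      dilution_factor=2)          # urine diluted 1:2
print(f"{conc:.1f} ng/ml")
print(f"{conc / 35.0:.3f} ng/ug creatinine")      # assuming 35 μg/ml creatinine
```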
We have determined the intra- and inter-assay CVs of the ACE2 activity assay for biological samples, and the results support the reliability of this assay. Three samples each of human and mouse urine, mouse PT cell culture medium, and mouse plasma were assayed in duplicate on six separate occasions. The mean intra-assay CV is 4.39 ± 0.74 % in human urine, 3.04 ± 0.61 % in mouse urine, 1.43 ± 0.20 % in cell culture medium, and 3.84 ± 0.63 % in mouse plasma. The mean inter-assay CV is 13.17 ± 0.77 % in human urine, 7.01 ± 1.04 % in mouse urine, 9.47 ± 0.22 % in cell culture medium, and 10.92 ± 1.88 % in mouse plasma (Table 3). (Table 3 note: the inter-assay CV for each sample is calculated by dividing the standard deviation of the six measurements by their mean ACE2 activity.)
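The CV calculation used for Table 3 is the usual one (standard deviation over mean); a short helper makes it explicit, with invented replicate values.

```python
# Coefficient of variation (%) as used for the intra-/inter-assay estimates:
# 100 * SD / mean over the replicate measurements. Values below are invented.
import statistics

def cv_percent(measurements):
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100

six_runs = [12.1, 11.4, 13.0, 12.6, 11.9, 12.3]   # ng/ml, one sample, six runs
print(f"inter-assay CV = {cv_percent(six_runs):.2f} %")
```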
We have determined the extent of inhibition of ACE2 activity by MLN-4760 (10⁻⁶ M) or DX600 (10⁻⁶ M, linear form, AnaSpec), using human and mouse rACE2. Specificity can also be assessed; in this regard we have used the ACE inhibitor captopril. At human and mouse rACE2 concentrations ranging from 0 to 400 ng/ml, ACE2 enzymatic activity (measured after 16 h) is not blocked by captopril (10⁻⁵ M) (Fig. 2). Indeed, as an internal control, one may add captopril (10⁻⁵ M) or another ACE inhibitor to each reaction well (as a component of the assay buffer) in the ACE2 assay. Both MLN-4760 and DX600 strongly inhibit the activity of human rACE2 over a wide range of concentrations of the recombinant enzyme (1.56-400 ng/ml), with the degree of inhibition ranging from 74.5 to 99.0 % for MLN-4760 and from 62.2 to 100 % for DX600 (Fig. 2a). At high concentrations of human rACE2 (100-400 ng/ml), MLN-4760 has a more potent inhibitory effect (98.7-99.0 %) than DX600 (62.2-90.0 %).
It is important to note that only MLN-4760 (and not DX600) significantly blocks the activity of mouse rACE2, with the degree of inhibition ranging from 87.6 to 98.2 % (Fig. 2b). DX600 exerts no inhibitory effect on mouse rACE2 activity in this assay; indeed, DX600 (10⁻⁶ M) actually enhanced RFU at lower levels of mouse rACE2 (3.13-50 ng/ml). In this regard, Pedersen et al. [16] measured the dissociation constant (Ki) between DX600 and human or mouse ACE2 in activity assays with the same ACE2 substrate, and reported the Ki for human ACE2 to be significantly lower (0.040 ± 0.005 μM) than that for mouse ACE2 (0.36 ± 0.03 μM), suggesting that DX600 has a higher affinity for human ACE2. The inhibitory effect of another commercially available conformational variant of DX600 (cyclic form) on ACE2 has not yet been tested in our assay. However, Ye et al. have reported that the disulfide-bridged cyclic variant of DX600 (Bachem) had no inhibitory effect on mouse or rat ACE2, even at high concentrations, but effectively inhibited human rACE2 [17]. We have also examined the dose-dependent effects of MLN-4760 and DX600 (linear form) on ACE2 activity (measured at 16 h), using mouse rACE2 at concentrations from 0 to 50 ng/ml. An increase in MLN-4760 concentration from 10⁻⁶ to 10⁻⁵ M does not significantly improve the assay characteristics (97.1-100 % inhibition at 10⁻⁵ M, 95.0-99.3 % at 10⁻⁶ M). Indeed, a highly linear relationship between ACE2 activity (RFU) and mouse rACE2 concentration is observed for both concentrations of MLN-4760 (R² = 0.9950, p < 0.001 for 10⁻⁵ M; R² = 0.9954, p < 0.001 for 10⁻⁶ M). In contrast, DX600 has no significant inhibitory effect on mouse rACE2 activity even at concentrations as high as 10⁻⁵ M. Similarly, two groups have reported that 10⁻⁶ M DX600 only partly (and ineffectively) blocks mouse recombinant or mouse kidney ACE2 activity [16, 17]; however, significant inhibition of mouse ACE2 has been observed with DX600 at 10⁻⁵ M using an assay buffer maintained at pH 6.5 [16]. In rat PT segments, we have used the linear form of DX600 (10⁻⁶ M) (obtained from Phoenix Pharmaceuticals) to demonstrate inhibition of Ang-(1-7) formation from Ang-(1-10), suggesting inhibition of rat ACE2 [5]. In rat kidney cortex, Ye et al. showed that DX600 (10⁻⁶ M) only partly inhibited ACE2 activity measured at pH 6.5 [17]. Accordingly, because of the relatively poor inhibition of rodent ACE2 by DX600, we use 10⁻⁶ M MLN-4760 for all experiments with mouse biological samples. For the ACE2 activity assay in human biological fluids, either MLN-4760 or DX600 can be used.
To determine if concentrations of ACE2 fluorogenic substrate lower than 11.25 μM can be used for the assay, we have studied the effect of various substrate concentrations (ranging from 1.41 to 11.25 μM) on mouse rACE2 enzyme activity (Fig. 3) . At low substrate concentrations (1.41 μM and 2.81 μM), mouse rACE2 activities are reduced to only 0-25 % of values obtained at 11.25 μM substrate concentration (Fig. 3a) . On the other hand, a highly linear relationship between the activity (RFU) and rACE2 concentration is observed for substrate concentration of 5.63 μM or 11.25 μM (R 2 = 0.9994, p < 0.001 for 11.25 μM; R 2 = 0.9903, p < 0.001 for 5.63 μM), with the same detection limits (1.56-50 ng/ml) (Fig. 3b) . Accordingly, in our assays with biological samples, we routinely use at least 11.25 μM of the ACE2 substrate, although use of 5.63 μM substrate concentration would be acceptable. However, we would not recommend use of ACE2 substrate concentrations lower than this.
Is it necessary to incubate samples for 16 h prior to measurement of fluorescence? We have addressed this question by measuring ACE2 activity at different time points (2, 6, 16, and 24 h of incubation), with mouse rACE2 concentrations ranging from 0 to 200 ng/ml (Fig. 4a). A time-dependent increase in ACE2 activity is observed for all mouse rACE2 concentrations. By 16 h of incubation, the RFU signal has reached saturation for mouse rACE2 concentrations between 100 and 200 ng/ml; the linear detection range for ACE2 with 16 h incubation is between 1.56 and 50 ng/ml (Fig. 4b). In contrast, with a 2 h incubation a linear relationship still exists between ACE2 activity and ACE2 concentration even for mouse rACE2 between 100 and 200 ng/ml (Fig. 4a), so the detection range of the assay for a 2 h incubation extends from 1.56 to 200 ng/ml (R² = 0.9981, p < 0.001), although RFU values are significantly lower at all ACE2 concentrations (see Fig. 4). In our lab, we routinely select 16 h as the incubation time, since ACE2 activity in most biological samples we have studied falls within the 16 h detection limit (i.e., up to 50 ng/ml). If the ACE2 protein level in the biological fluid of interest is relatively high, the incubation time can be reduced to as short as 2 h.
As noted, with this assay we perform endpoint RFU measurements, calculate ACE2 protein concentrations from standard curves, and typically record this value in ng/ml for biological fluids, rather than as ACE2 activity per se. ACE2 activity can also be reported as RFU per volume of sample per unit time (RFU/ml/min or RFU/ml/h). To present data as ACE2 enzymatic activity, the 96-well plate should be read on a fluorescence reader in kinetic mode. The maximum velocity (Vmax, RFU/min) in each sample can be determined and adjusted for the inhibitor-containing wells. Specific ACE2 activity (pmol/min) is then calculated from the adjusted Vmax using a conversion factor (pmol/RFU) determined from a calibration standard curve constructed with a fluorescent peptide fragment (MCA-Pro-Leu-OH, Bachem, Bubendorf, Switzerland, catalog number M-1975). Final ACE2 enzymatic activity in biological fluids can be corrected for the sample dilution factor and expressed as pmol/ml/min.
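The kinetic-mode calculation above can be written out as a short helper. Every numeric value below is a placeholder, and the inhibitor adjustment assumes the MLN-4760-containing well defines the non-ACE2 background, as described.

```python
# Specific ACE2 activity from kinetic reads: subtract the Vmax of the
# inhibitor-containing well, convert RFU/min to pmol/min via the MCA-Pro-Leu-OH
# calibration factor, then express per ml of the original (undiluted) sample.
def ace2_activity_pmol_per_ml_min(vmax, vmax_inhibited, pmol_per_rfu,
                                  well_sample_ml, dilution_factor):
    ace2_rfu_min = vmax - vmax_inhibited          # MLN-4760-blockable component
    pmol_min = ace2_rfu_min * pmol_per_rfu        # calibration: pmol per RFU
    return pmol_min * dilution_factor / well_sample_ml

print(ace2_activity_pmol_per_ml_min(
    vmax=95.0, vmax_inhibited=5.0,                # RFU/min (placeholders)
    pmol_per_rfu=0.002,                           # from the MCA-Pro-Leu-OH curve
    well_sample_ml=0.015,                         # 15 μl diluted sample per well
    dilution_factor=2))                           # sample diluted 1:2
```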
Pedersen et al. [16] have reported that activity assays of human, rat, and mouse ACE2 are highly pH-dependent, with a maximal hydrolysis rate of the ACE2 substrate at assay buffer pH 6.5 and minimal activity at pH 8.0 [16]. (Fig. 4b legend: Standard curve generated for mouse rACE2 with 16 h incubation; the relationship is highly linear for rACE2 concentrations between 0 and 50 ng/ml.) We use an assay buffer at pH 6.8 for our ACE2 activity assay, and we recommend measuring the pH of biological samples prior to conducting the assay. To limit any effect of the pH of biological samples on the assay, we advise diluting samples with the assay buffer as much as possible.
For measurement of plasma ACE2 activity, avoidance of chelators such as EDTA in the blood collection tubes is recommended, since chelation could inhibit the activity of the metallopeptidase ACE2.
Urine samples may have significant background fluorescence at 405 nm. For example, in undiluted urine samples from mice with gene deletion of ACE2, we typically obtain background fluorescence readings greater than 2000 RFU for 7.5 μl urine in the assay. Because such autofluorescence may interfere with the fluorescence signal and reduce the ability to detect ACE2, we suggest dilution of urine samples (e.g., in wild-type mice we typically dilute the urine sample 1:15 with assay buffer to 15 μl/well).
Does freeze/thaw of biological fluids affect ACE2 activity? We have measured ACE2 activity in samples of human and mouse urine, and mouse PT cell culture medium after as many as three freeze/thaw cycles. Interestingly, we found no significant difference in ACE2 activity between any freeze/thaw cycle for all three sample categories (data not shown, p > 0.05). Nonetheless, for biological samples it is likely prudent to restrict use of repetitive freeze/thaw for this assay. Furthermore, we do not recommend repetitively freezing and thawing recombinant ACE2 protein, as we found that ACE2 activity can be significantly lost after a second freeze-thaw cycle.
In previous studies, we reported that urinary ACE2 activity is enhanced in renal transplant patients with diabetes, using this enzyme activity assay [13] . In preliminary data, we also found that incubation of mouse PT cells with high glucose concentrations enhances ACE2 activity in the culture medium, suggesting increased shedding of ACE2 from the cell membrane (data not shown). Accordingly, we have determined if high glucose concentrations alone can affect the assay. As shown in Fig. 5a , ACE2 activity in mouse PT cell culture medium with a high d-glucose concentration (25 mM) is not significantly different from that in the same medium with a normal d-glucose concentration (7.8 mM) (p > 0.05, n = 3).
We have also studied the effect of increasing concentrations of creatinine (Sigma-Aldrich) and urea (Sigma-Aldrich) on the assay, since these substances are relatively abundant in urine. Using mouse rACE2, we found that neither creatinine (0.01-3.75 mM) nor urea (1.5-300 mM) had any significant effect on ACE2 activity when added to assay buffer (Fig. 5b, c) .
Finally, since we reported a correlation between urinary protein and urinary ACE2 levels [13] , we have examined the effect of exogenous albumin on the ACE2 activity assay. For these experiments, increasing concentrations of albumin (Cohn Fraction V, Sigma-Aldrich, Catalog number A8806) were added to urine samples (1 μl/well) from C57BL6 mice with ACE2 gene deletion, and ACE2 activity was measured following incubation with increasing concentrations of mouse rACE2 (0-25 ng/ml). For these experiments, the average albumin concentration in the urine samples from mice with ACE2 gene deletion was 0.15 μg/ml. As shown in Fig. 5d , addition of exogenous albumin (0.75-750 μg/ml) had no effect on rACE2 activity.
|
their association with enteric diseases is not well documented, with the exception of turkey and mink astrovirus infections [2] .
Family Astroviridae is separated into two genera: viruses of the genus Mamastrovirus infect mammals, and those of Avastrovirus infect birds [3]. Avastroviruses include duck astrovirus 1 (DAstV-1), turkey astrovirus 1 and 2 (TAstV-1 and TAstV-2), and avian nephritis virus (ANV) [2]. Mamastroviruses appear to have a broad host range, including human [1], sheep [4], cow [5], pig [6], dog [7], cat [8], red deer [9], mouse [10], mink [11], bat [12], cheetah [13], brown rat [14], roe deer [15], sea lion and dolphin [16], and rabbit [17].
Porcine astrovirus (PAstV) was first detected by EM in the feces of a diarrheic piglet [6] and was later isolated in culture [18]. Molecular characterization of the capsid (ORF2) gene from this isolate followed some years later [19]. Since then, research groups have successfully used PCR approaches to investigate the presence and diversity of PAstV [20-22]. PAstV has been detected in several countries, including South Africa [23], the Czech Republic [20], Hungary [22], Canada [21], and Colombia [24]. In South Korea, studies on astrovirus have been limited to its detection in human infections, and no attempt has yet been made to determine the extent of astrovirus infection in the country's pig population. The aim of this study was therefore to investigate the genetic groups of Korean PAstV in domestic pigs and wild boars and to identify the incidence of co-infection with other porcine enteric viruses.
A total of 129 fecal samples from domestic pigs (60 piglets under 3 weeks old, 45 weaned pigs, 14 growing-finishing pigs, and 10 sows over 1 year old) were collected from six piggery farms with good breeding facilities in four provinces of South Korea from January to June 2011; 90 were from diarrheic and 39 from non-diarrheic pigs. A total of 146 fecal samples from wild boars over 1 year old were collected in wildlife areas of five provinces of South Korea during the hunting season, from December 2010 to January 2011; 34 were from diarrheic and 112 from non-diarrheic boars.
Viral RNA was extracted from the feces using TRIzol LS according to the manufacturer's instructions. PAstV was detected in fecal specimens by RT-PCR, as previously described [22], with primers specific for the RdRp and ORF2 regions of PAstV (PAstV-F, 5′-TGACATTTTGTGGATTTACAGTT-3′ and PAstV-R, 5′-CACCCAGGGCTGACCA-3′). The RT-PCR amplified a 799-nt fragment at an annealing temperature of 45 °C. Products of the expected size were cloned with the pGEM-T Vector System II™ (Promega, Cat. No. A3610, USA). The cloned gene was sequenced with T7 and SP6 sequencing primers on an ABI Prism 3730XL DNA Sequencer (Applied Biosystems, Foster City, CA, USA) at the Macrogen Institute (Macrogen, Seoul, Korea). The sequences of all PAstV-positive samples were submitted to GenBank under accession numbers JQ696831-JQ696856. The astroviruses used in this study are listed in Table 1 along with their GenBank accession numbers.
To investigate the relationship between astroviruses and other economically important viral diseases that cause diarrhea in piglets in Asia, screening tests were conducted to detect porcine epidemic diarrhea virus (PEDV), transmissible gastroenteritis virus (TGEV), and porcine group A rotavirus (GAR), as previously described [25]. The primer pairs used were P1 (TTCTGAGTCACGAACAGCCA, 1466-1485) and P2 (CATATGCAGCCTGCTCTGAA, 2097-2116) for the S gene of PEDV; T1 (GTGGTTTTGGTYRTAAATGC, 16-35) and T2 (CACTAACCAACGTGGARCTA, 855-874) for the S gene of TGEV; and rot3 (AAAGATGCTAGGGACAAAATTG, 57-78) and rot5 (TTCAGATTGTGGAGCTATTCCA, 344-365) for the segment 6 region of group A rotavirus. The expected multiplex RT-PCR products were 859 bp for TGEV, 651 bp for PEDV, and 309 bp for rotavirus, which can be differentiated by agarose gel electrophoresis.
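As a toy illustration of how the three products are distinguished on a gel, a size-based lookup suffices; the tolerance below is an arbitrary choice for the example.

```python
# Map an observed amplicon size from the gel to the corresponding virus,
# using the expected multiplex RT-PCR product sizes given above.
EXPECTED_BP = {"TGEV": 859, "PEDV": 651, "Group A rotavirus": 309}

def call_band(observed_bp, tolerance=20):
    for virus, size in EXPECTED_BP.items():
        if abs(observed_bp - size) <= tolerance:
            return virus
    return "no call"

print(call_band(312))   # -> "Group A rotavirus"
print(call_band(700))   # -> "no call"
```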
Of the 129 domestic pig fecal samples tested, 25 were positive for PAstV. The prevalence of PAstV in weaned pigs (35.6 %, 16/45) was higher than that in suckling piglets (15.0 %, 9/60) and in growing-finishing pigs (9.1 %, 1/14) (Table 2). Only one wild boar, from Gyunggi Province, tested positive for PAstV (0.7 %, 1/146). The low prevalence of PAstV in wild boars may reflect the fact that older animals (over 1 year old) are generally less susceptible to infection and that wild pigs tend to be more resistant to many diseases than domesticated ones.
The percentage of PAstV-positive samples differed among the six pig farms: Chungchong A, 28.6 % (10/35); Chungchong B, 22.9 % (8/35); Kangwon, 20.0 % (4/20); Gyunggi A, 7.1 % (1/14); Gyunggi B, 20.0 % (3/15); and Gyungsang, 0 % (0/10). The low or absent incidence of PAstV in Gyunggi A (growing-finishing pigs) and Gyungsang (sows) can likewise be attributed to the lower susceptibility of adult pigs to infection. PAstV-positive samples accounted for 14.0 % (18/129) of non-diarrheic and 6.2 % (8/129) of diarrheic pig fecal samples. These results suggest that PAstV is widespread in South Korea regardless of the disease status (with or without clinical manifestations) of pigs. Although astroviruses are highly prevalent in young pigs and are mostly found in diarrheic animals, PAstV is also a common finding in fecal samples of apparently healthy pigs [21, 22] . The clinical significance of PAstV infection remains to be clarified.
Clinical diarrhea in piglets is frequently reported in association with rotavirus, coronavirus, and calicivirus-like infections [6, 18, 20, 21, 26] . Although PEDV and TGEV infections were not identified in any of the fecal samples, porcine GAR infection was identified in 6.2 % (17/275) of the samples collected from suckling pigs under 3 weeks old (Table 2). Furthermore, co-infection with PAstV and GAR was observed in two cases (one diarrheic and one non-diarrheic fecal sample). However, it is not yet clear whether GAR is directly associated with astrovirus infection in pigs.
All astrovirus sequences were initially aligned with the ClustalX 1.8 program [27] . The nucleotide sequences were translated, and nucleotide and amino acid sequence identities among the astrovirus strains were calculated with BioEdit 7.053 [28] . Bayesian trees were generated with MrBayes 3.1.2 [29, 30] using best-fit models selected with ProtTest 1.4 [31] for the amino acid alignments. Markov chain Monte Carlo (MCMC) analyses were run for 1,000,000 generations for each amino acid sequence.
Bayesian posterior probabilities in MrBayes 3.1.2 were estimated on the basis of a 70 % majority-rule consensus of the trees. For each analysis, a chicken astrovirus (NC_003790) was specified as the outgroup, and graphic output was produced with TreeView 1.6.1 [32] . The best-fit models for the RdRp and ORF2 amino acid sequences, selected with ProtTest 1.4 according to the Akaike information criterion (AIC), were WAG+I+G and WAG+G+F, respectively.
Bayesian inference (BI) trees for the RdRp (Fig. 1) and ORF2 (Fig. 2) amino acid sequences revealed the presence of five unrelated PAstV lineages. The first lineage (PAstV 1) contained the PoAstV12-3 strain (HM756258) from a Canadian pig and two porcine strains (Y15938 and AB037272) derived from Japanese pigs. Interestingly, rat astrovirus (HM450382) and porcine PoAstV CC12 (JN088537) formed two distinct lineages on the Bayesian trees for the RdRp and ORF2 amino acid sequences (Figs. 1, 2). Astrovirus strains in Groups 1 (G1), 2 (G2), and 4 (G4) showed similar topologies on the two Bayesian trees. However, the strains in Group 3 (G3) on the ORF2 tree were divided into G3 and Group 5 (G5) on the RdRp tree (Figs. 1, 2). A strain isolated from a Hungarian wild boar in 2011 [33] belonged to PAstV 4, or Group 4 (G4), which also contained PAstK31, derived from a Korean wild boar.
A previous study suggested that the number of PAstV lineages extends to a total of five, all of which most likely represent distinct species of different origins [34] . However, as AstV research data from countries around the world accumulate, future studies may unveil additional genetic lineages. In this study, the porcine astrovirus strains appeared to be phylogenetically related not only to prototypical human astroviruses (as was already known) but also to recently discovered novel human strains. This finding suggests multiple cross-species transmission events between hosts and other animal species.
Several recent studies have shown that bat astroviruses form multiple independent lineages [35, 36] . Bat astrovirus strains in this study likewise formed independent lineages; in particular, the LD71 (FJ571067) strain was closely related to astrovirus strains of human, sheep, mink, and sea lion origin (Figs. 1, 2).
A previous study suggested that porcine AstVs have played an active role in the evolution and ecology of the Astroviridae [21] . Recent studies have shown evidence of multiple recombination events between distinct PAstV strains and between PAstV and human astrovirus (HAstV) in the variable region of ORF2 [24] , as well as interspecies recombination between porcine and deer astroviruses [37] .
A study of the molecular epidemiology and genetic diversity of human astrovirus in South Korea from 2002 to 2007 revealed genotype 1 to be the most prevalent, accounting for 72.19 % of strains, followed by genotypes 8 (9.63 %), 6 (6.95 %), 4 (6.42 %), 2 (3.21 %), and 3 (1.6 %) [38] . This finding suggests that little interspecies (human-pig) transmission has occurred in South Korea to date. (In Figs. 1 and 2, Korean PAstVs are shown in bold, strains isolated from Korean and Hungarian wild boars are marked with a star and an arrow, respectively, and the numbers above the nodes represent posterior probabilities.)
In conclusion, this study extends current knowledge of PAstV in wild boars and domestic pigs. More extensive studies of wildlife PAstVs should be conducted to elucidate their potential role in the epidemiology of astrovirus infection in domestic pig populations. Furthermore, continuous surveillance of PAstV prevalence in both populations will provide a wider understanding of possible cross-species transmission, including transmission to humans.
|
Porcine epidemic diarrhea virus (PEDV) causes severe enteropathogenic diarrhea in piglets, especially in neonates, and the disease has a high mortality rate that can reach 80% in certain situations [1] . The disease was first recognized in England in 1971 [2] , and outbreaks have since been reported frequently in Europe and Asia [3] [4] [5] [6] . Since the 1990s, a periodic vaccination strategy has been applied on pig farms nationwide to control the disease, but these vaccines were not completely effective in preventing it, leading to growing losses of newborn piglets in Fujian [7, 8] .
PEDV is an enveloped, single-stranded RNA virus belonging to the family Coronaviridae [9] [10] [11] . Its genome contains six ORFs, encoding pp1ab (pol), spike (S), membrane (M), ORF3, small membrane (sM), and nucleocapsid (N) [12, 13] . ORF3 encodes an ion channel protein and regulates virus production [14] , and its loss might result in attenuation of the virus in the natural host [11] . Differentiation of ORF3 could be a marker of adaptation to cell culture and attenuation of the virus [15] , making it a valuable tool for studying the molecular epidemiology of PEDV [16] . Variation among field isolates of PEDV might change the genotypes and may be one of the reasons for outbreaks in immunized pigs in Hebei, China [17] . Similar results were demonstrated in our preliminary study of field samples from 3 different swine farms in Fujian [18] . It is therefore necessary to analyze the genetic heterogeneity of PEDV to determine which genotype prevails in Fujian. In this study, the ORF3 gene of PEDV field samples from different farms in Fujian province was cloned and sequenced for genetic diversity analysis.
Portions of intestine or stool specimens were collected individually from piglets with acute enteritis and watery diarrhea on different large swine farms in Fujian province between 2010 and 2012 and tested for PEDV with the PED Ag Test Kit (Bionote, Seogu-Dong, Korea). PEDV-positive samples were used for sequence and phylogenetic analysis. Intestinal samples were homogenized in 9 volumes of phosphate-buffered saline (PBS). The suspensions were vortexed and centrifuged for 10 min at 1,700 × g, and the supernatants were collected and stored at −80 °C until use.
Total RNA was extracted from the supernatants of the homogenized samples with the RNAiso Plus reagent (Takara, Dalian, Japan) according to the manufacturer's instructions. The forward and reverse primers [18] , ORFS 5'-ACCGAGTTGAGACATACA-3' and ORFR 5'-GGAATAGAACCGTTAGACAT-3', were used to amplify the ORF3 gene from the extracted RNA with the PrimeScript® One Step RT-PCR Kit Ver. 2 (Takara, Dalian, Japan) under the following conditions: reverse transcription at 50 °C for 30 min; denaturation at 94 °C for 2 min; and 30 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 1 min. The RT-PCR products were analyzed by 1.5% agarose gel electrophoresis and visualized by ultraviolet illumination after ethidium bromide staining. Bands of the expected size were excised, and the DNA was purified using the QIAquick Gel Extraction Kit (QIAGEN, Dusseldorf, Germany) according to the manufacturer's instructions and then sequenced by Takara.
The reference strains used for the sequence analysis are described in Table 1. Alignment and phylogenetic analysis of the nucleotide sequences of the ORF3 gene were performed with the ClustalW method in the Mega 4.0 program [19] . The antigenic indexes of the sequences were predicted using the DNAMAN program. Phylogenetic trees were constructed from the nucleotide and deduced amino acid sequences using the bootstrap neighbor-joining method [20] , and the reliability of the topologies was estimated by bootstrap analysis with 1,000 replicates.

In this study, 27 samples tested positive for PEDV. Their ORF3 genes were amplified, cloned, and sequenced for alignment and phylogenetic analysis. Nucleotide sequence analysis indicated that the samples separated into two groups (Figure 1). The ORF3 gene of every sample except P55 was 675 bp in length and encoded a protein of 224 amino acids, as in 10 of the reference strains. Six of them (F422, P1, P4, P9, P14, and P15), together with three reference strains, carried eight unique point mutations and formed a subgroup within Group 1. In contrast, the single local strain P55 was 626 bp in length and encoded a protein of 92 amino acids, similar to five reference strains (CH/GSJIII/07, CV777 truncated ORF3, CH/BJ/2011 truncated ORF3, DBI865 truncated ORF3, and Zhejiang-08) in Group 2. All the Group 2 strains had a deletion at nt 245-294, except the attenuated DR13 strain, which had a similar deletion at nt 244-294.
In terms of predicted amino acid sequence, Subgroup 1, comprising the six local strains, had two unique mutations, L to S at position 25 and C to F at position 107 (Figure 2). It also had a point mutation (V to F at position 80, Figure 2) shared with the Group 2 strains. P55 had a large deletion from aa 82 to the end of the protein, as seen in most of the Group 2 strains (Figure 2). No differences were found in the other 20 Fujian samples relative to the relevant reference strains.
To analyze the phylogenetic relationships of the 27 Fujian samples and reference strains from various parts of the world, we constructed a neighbor-joining phylogenetic tree from their ORF3 amino acid sequences (Figure 3). All the local strains separated into three potential clusters, which might have different origins based on the topology. P55 clustered into Group 2, while the others clustered into Group 1. Within Group 1, 6 Fujian samples formed one subgroup (Subgroup 1) and the other 20 samples fell into Subgroup 3.
Although a bi-combined attenuated vaccine against TGEV and PEDV infection has been authorized for use on swine farms in Fujian province, outbreaks of PED have caused tremendous economic losses [21] . It is therefore necessary to characterize the genetic sequences of PEDV field samples and determine how the virus has spread. In this study, 27 PEDV Fujian samples were identified, and ORF3 gene sequence analysis indicated that they could be separated into two general groups. P55 and five reference strains clustered into Group 2, which carries a large deletion in both the nucleotide and amino acid sequences. This genotype might be prevalent in China, as four strains in this group were collected from different Chinese provinces (Table 1). In addition, although this deletion has been associated with reduced virulence during cell culture adaptation [15] , the five strains carrying it were virulent field samples, suggesting that other mutations are related to virulence.
Six Fujian strains, as well as three reference strains, were characterized by eight unique point mutations in the nucleotide sequence and three amino acid changes in the peptide; these clustered into Subgroup 1 (Figure 1). Notably, these samples were collected from four different geographic regions: South Korea (DX), Hubei in North China (AJ1102), Guangdong in South China (GD-A), and Fujian in South China (F422, P1, P4, P9, P14, and P15). Whether recombination has occurred among the field strains in these areas needs further study, but these mutations might be useful for differentiating PEDV groups or as new DNA markers for PEDV field strains. Based on DNAMAN analysis, the ORF3 protein of reference strain CV777 had nine regions of high antigenic index, located at amino acids 4-39, 41-64, 70-114, 118-124, 134-148, 158-174, 179-190, 192-207, and 211-221. The Subgroup 1 strains had amino acid changes at positions 25, 80, and 107 (Figure 2), which fall within the antigenic regions 4-39 and 70-114 and might therefore alter their antigenicity; these changes could serve as a marker for this subgroup. No differences were found in the other regions. Phylogenetic analysis indicated that all the strains divided into 2 groups (Group 1 and Group 2, Figure 3), with Group 1 formed by three subgroups. None of the Fujian strains was close to the vaccine development strain CV777 [22] , which might explain why the vaccine has not been efficient enough to prevent PED. In addition, as the samples were taken from farms covering all districts of Fujian, the phylogenetic tree suggests that three genotypes of PEDV may be circulating in the province.
Similar results were obtained by Li et al. [23] , although they investigated a different gene (the S gene) of PEDV from nine farms in 2011. In their work, three new variants were identified from field diarrhea samples, a finding similar to the status of the Fujian local strains (F422, P1, P4, P9, P14, and P15) in Subgroup 1 (Figures 1-3) of this study. The presence of another two field isolates sharing high sequence identity with the attenuated strain DR13 from South Korea parallels the classification of P55 into Group 2 here (Figures 1 and 2). Both results indicate that the effectiveness of the CV777-based vaccine has been compromised, which might contribute to the outbreaks of severe diarrhea on China's pig farms. (Figure 3 legend: reference strains are listed in Table 1. Tree topology was constructed using the F84 model, and bootstrap resampling (1,000 data sets) of the multiple alignments was used to test the statistical robustness of the trees obtained by NJ (program Neighbor from the Mega v4.0 package). G1-G4: groups delineated by the phylogenetic tree. Underlines represent the Fujian field samples in this study.)
In conclusion, PEDV field samples from Fujian province were characterized and compared by analyzing the sequence heterogeneity of the ORF3 gene. All the samples separated into two general groups. The Fujian strain P55 and five reference strains clustered into Group 2, which carries a large deletion in both the nucleotide and peptide sequences, representing a distinct genotype possibly involved in variation of virulence. Six Fujian strains clustered within Group 1 as another genotype, with unique point mutations that might be used to differentiate PEDV groups or serve as new DNA markers, and that may confer antigenic variation. Phylogenetic analysis revealed that the collected Fujian strains were very distant from the vaccine development strain CV777, which might be why the vaccine has been inefficient in controlling the disease. This study may help in choosing an appropriate PEDV field strain as a vaccine candidate to prevent outbreaks of PED more efficiently.
|
Accordingly, understanding the epidemiology of respiratory viruses is important for promoting preparedness to tackle this public health threat. [3] [4] [5] [6] [7] [8] The epidemiology of ARI-related respiratory viruses in developed countries with temperate climates has been well studied. 1, 5, 9 In contrast to the accumulating knowledge of ARIs in temperate regions, epidemiological research on acute respiratory viral illness in tropical and subtropical areas is limited, although epidemiological diversity according to local climate and latitude has been well documented. 8, [10] [11] [12] Knowledge of the regional distribution of respiratory viruses is essential not only for local prevention and control of ARIs but also for global health decision-making. 13 To our knowledge, limited information is available on the epidemiology and clinical characteristics of respiratory viral infections in the United Arab Emirates (UAE). Although a few studies in nearby countries with similar meteorology have described the epidemiology of ARIs, those reports cover limited population groups. 14 Our study was designed to describe the molecular epidemiology of ARI-related respiratory viruses, including their seasonality, in the northern UAE over a period of more than 2 years.
We collected both upper and lower respiratory specimens for analysis from all patients who visited the Sheikh Khalifa Specialty Hospital (SKSH) with acute respiratory illness between November 2015 and February 2018. Physicians were encouraged frequently throughout the year, via SMS and e-mail, to perform viral real-time reverse transcription polymerase chain reaction (rRT-PCR) tests to diagnose suspected ARI cases. An ARI was defined as the simultaneous occurrence of at least one respiratory symptom or sign (cough, purulent sputum, sore throat, nasal congestion, rhinorrhea, dyspnea, wheezing, or injected tonsils) and at least one of the following systemic symptoms: fever, chills, myalgia, or malaise.
We used the Edwards harmonic technique to measure the peak-to-low ratio. 15, 16 The Edwards technique is a geometrical model that fits a sine curve to a time series of frequencies using ordinary regression methods. The peak-to-low ratio is interpreted as a measure of relative risk comparing the month with the highest incidence (peak) to the month with the lowest incidence (trough). 15 The positivity rates for respiratory viruses during the discrete peak and low periods were compared using a direct method (χ2-test) to assess statistical significance. 17 We applied the Chi-squared (χ2) test and Fisher's exact test to paired nominal data, and Student's t test to compare the means of continuous data. A two-sided alpha level of 0.05 defined statistical significance. All comparative statistical analyses were conducted using SPSS version 18.0 software (SPSS Inc, Chicago, IL).

Among the virus-positive cases, the admission rate was significantly higher in the elderly group than in the other two age groups (Table 3). The numbers of FLU and HRV cases were large enough to evaluate the associated seasonality.
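For readers who want to see the mechanics, the following R sketch illustrates the harmonic-regression idea underlying the Edwards technique described above. All data and variable names here are illustrative, and Edwards' original geometric formulation differs in detail from this Poisson-regression variant.

```r
# Fit a sine/cosine pair to monthly frequencies and derive a peak-to-low
# ratio. Synthetic data; not the study's counts.
set.seed(1)
month <- 1:24                                    # two years of monthly data
count <- rpois(24, exp(2 + 0.4 * cos(2 * pi * month / 12)))

fit <- glm(count ~ sin(2 * pi * month / 12) + cos(2 * pi * month / 12),
           family = poisson)
b <- coef(fit)
amplitude <- sqrt(b[2]^2 + b[3]^2)               # amplitude of fitted curve
peak_to_low <- exp(2 * amplitude)                # fitted peak rate / trough rate
peak_to_low
```

The fitted log-rate traces a sinusoid with a single annual peak, so the ratio of the fitted maximum to the fitted minimum serves as the peak-to-low ratio.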
The peak-to-low ratio of FLU was 2.26 (95% CI: 1.52-3.35) and it had significant seasonality (P < 0.01). HRV was detected year-round and showed no significant seasonality. The peak-to-low ratio of HRV was 1.44 (95% CI: 1.00-2.34), which was insignificant (P = 0.22).
ARI is highly prevalent and is responsible for a high burden of disease in many countries. Respiratory viral infection is the most common cause of ARI, and predicting the timing of peak respiratory virus activity is important for improving disease control.
In this study, the overall positivity rate of respiratory viruses and the incidence of each virus by age group were similar to those reported by previous studies describing the epidemiology of respiratory viruses. 6, 7, 9, 18 Several studies have reported that men are more susceptible to viral infection, and have more vigorous immune and behavioral responses. [19] [20] [21] However, there was no difference in the positivity rate or the incidence of ARI by sex in our study. The higher positivity rate of respiratory viruses in the pediatric group was compatible with that reported in other studies. 9, 22

Of the 1362 patients included in our study, 388 (28.5%) with ARI were admitted for further management. Our data showed a lower admission rate for virus-positive than virus-negative patients (13.6% vs 37.3%), implying a better prognosis for virus-related ARIs than for ARIs of other etiologies, as noted in several previous studies. 23, 24 The higher admission rate in the elderly group is also in agreement with that reported by earlier studies. 2, 24, 25

FLU was the most common respiratory virus in all age groups, and the positivity rate was 20.0%, which is similar to previous data reported from studies in Oman. 14,26 A/H3N2 was the most common subtype detected throughout the year. The high incidence of FLU and A/H3N2 may be related to the severe symptoms characteristic of these infections, which increase the likelihood of patients visiting the emergency department. Many studies have reported that ARI symptoms are more likely to be experienced by individuals infected with FLU than by those infected with other respiratory viruses. In addition, A/H3N2 infections are more severe than A/H1N1 or B infections. 27,28 FLU vaccination could have influenced the incidence of FLU and its subtypes in the present study. However, regional information on FLU vaccination is not available in the UAE.
In this study, HRV was the second-most commonly detected respiratory virus in all age groups, in agreement with other studies. 29, 30 The easy access of patients with the common cold to the emergency department might explain the higher incidence of HRV in the UAE. However, the incidence of HRV could have been overestimated.
Many patients tested positive for HRV at the emergency department but actually had bacterial coinfections, and their more severe symptoms could have been attributable to the bacterial infections or to comorbidities such as asthma.

RSV is known as the leading cause of ARI. 1, 9, 31 However, the incidence of RSV among positive cases in our study was lower than that reported in other publications. 31, 32 It is possible that the incidence of RSV was underestimated in the current study because comparatively small numbers of infants and young children were included. The hot and dry climate of the UAE could also be a possible explanation for the low incidence of RSV in this region. RSV is more common during the rainy season in tropical and subtropical areas and is reported to have a low survival rate at high temperatures. 32, 33

Small numbers of HCoV, HAdV, HMPV, HBoV, HEV, and HPIV cases were sporadically detected in our study. Reports suggest that because of the mildness of symptoms and the self-limiting nature of these infections, patients may not seek medical care, which may lead to an underestimation of the incidence of infection. 7,34 However, our data suggest that in the UAE, as in temperate countries, a diverse set of respiratory viruses contribute to ARI cases severe enough to compel patients to visit medical facilities. Because we have reported the incidence of such severe infections, we believe that the data in this study will be of value to medical institutions in the UAE.
Among the virus-positive cases in our study, the coinfection rate was 7.9% overall and 14.7% in the pediatric group. These are compatible with the values mentioned in previous reports. 9 However, several similarly designed studies showed higher rates of coinfection (from 13.2%-42.5%), particularly in young children. 7, 22, 35 HRV was the most common virus to be found in cases of coinfection with other viruses in our study; this finding was in agreement with that reported in the literature. 9, 30 Coinfection with FLU and HRV was common in our study although other studies have reported a negative association between the two viruses. 5,36 FLU and HRV may have been detected in coinfected patients more frequently because the incidence of both viruses was higher than that of the other respiratory viruses in this study.
A characteristic semi-seasonal pattern in respiratory viral infection rates was observed in our study. FLU was the main contributor to the seasonal pattern; the subtypes A/H1N1/pdm09 and FLU B contributed predominantly to the winter peak. Low temperatures and low specific humidity during the winter season in the UAE may be the cause of the winter peak in the number of FLU cases. 12 In addition to the major peak in the winter, a small peak in August was also observed for FLU (semi-seasonal pattern), which parallels the patterns reported from temperate and tropical countries in the Northern Hemisphere. 5, 8, 12 Most respiratory viral infections, including FLU, are more common in the winter in temperate countries. 8 However, FLU and RSV infections tend to show annual peaks in association with the rainy season in tropical areas. 8, 33, 37 In subtropical areas, previous studies have observed that respiratory viruses, including FLU, peaked in the coldest months, as in temperate areas. 38, 39 The UAE is located in a subtropical area and does not have a rainy season. Thus, we expected a peak of FLU in the winter season, as is common in other subtropical areas. 12 However, FLU in the UAE showed a two-peak semi-seasonal pattern, which has also been observed in some other regions located at similar latitudes (Taiwan and Nepal). 8 The majority of the population of the UAE is exposed to a dry, air-conditioned environment for most of the day, especially during the summer season, because outside temperatures range from 39 to 45°C.
Despite this relatively controlled environment, the summer peak of FLU in the UAE could be explained by prolonged effective contact rates, because of the increased indoor activity, and lowered relative humidity, which have been reported to be related to the incidence of viral illnesses. 40, 41 This finding implies the potential necessity for sustained FLU vaccination campaigns and repeated vaccinations to reduce the severity of FLU infections, especially for high-risk individuals. 42, 43 Nationwide data are needed to confirm the semi-seasonal pattern of FLU in the UAE.
Our study was subject to some limitations. First, this was a single-center study with a relatively small sample size; moreover, the decision of whether to collect a sample from a patient for viral testing was at the physician's discretion. Physicians in the emergency department may have preferred to perform a rapid influenza antigen test rather than a multiplex rRT-PCR test, which requires a longer time to produce results. Hence, the incidence of respiratory viruses may have been underestimated because of the low sensitivity of the rapid antigen test, and because viruses other than FLU may not have been identified even if present in the samples. 25, 44 Second, our results may not be generalizable to the entire population of the UAE because most of the study specimens were taken from patients in the Northern Emirates. In addition, our study could have underestimated the degree of seasonality in the UAE because the seasonality of viral infections tends to be more apparent when analyses include mild cases that require minimal conservative care (for example, outpatient care in a primary clinic). 45
To our knowledge, this is the first study to describe the rRT-PCR-based seasonal pattern of respiratory viruses related to ARI in the UAE. Moreover, the high sensitivity of the detection method enabled the collection of more reliable epidemiologic data. To date, very few studies have provided such data for the Middle Eastern region. The seasonality of FLU was similar to that observed in temperate countries and was also consistent with a semi-seasonal pattern of incidence. This knowledge can serve as baseline data for more expansive future surveillance studies of respiratory viruses in the UAE and in the Middle Eastern region.
We would like to acknowledge the physicians and employees of the department of pulmonology, intensive care units, and the emergency department of the Sheikh Khalifa Specialty Hospital who participated in the collection of study specimens. This project did not receive any financial or other support from any organization.
|
This paper builds on our report (Benkeser et al., 2020) written in response to a request by the U.S. Food and Drug Administration (FDA) for statistical analysis recommendations for COVID-19 treatment trials. We aim to help inform the choice of estimand (i.e., target of inference) and analysis method to be used in COVID-19 treatment trials. To this end, we describe treatment effect estimands for binary, ordinal and time-to-event outcomes.
Importantly, the interpretability of these estimands does not rely on correct specification of models. For binary outcomes, we consider the risk difference, relative risk, and odds ratio.
For ordinal outcomes, we consider the difference in means, the Mann-Whitney (rank-based) estimand, and the average of the cumulative log odds ratios over levels of the outcome. For time-to-event outcomes, we consider the difference in restricted mean survival times, the difference in survival probabilities, and the ratio of survival probabilities.
For each estimand, we give covariate adjusted estimators of these quantities that (1) leverage information from baseline variables and (2) are robust to model misspecification.
We introduce a new covariate adjustment method for ordinal outcomes, but use existing methods for binary and time-to-event outcomes. By incorporating baseline variable information, covariate adjusted estimators often enjoy smaller variance compared to estimators that ignore this information, thereby resulting in reductions in the required sample size to achieve a desired power.
To evaluate the performance of covariate adjusted estimators, we simulated two-arm, randomized trials comparing a hypothetical COVID-19 treatment versus standard of care.
Our simulated distributions are derived from two sources: longitudinal data on over 500 patients hospitalized at Weill Cornell Medicine New York Presbyterian Hospital prior to March 28, 2020, and a preliminary description of 2449 cases reported to the CDC from February 12 to March 16, 2020. We focused on hospitalized, COVID-19 positive patients and specified distributions for binary, ordinal and time-to-event outcomes based on information collected on intensive care unit (ICU) admission, intubation with ventilation, and death.
We conducted simulations using all three estimands when the outcome is ordinal, but only evaluated the risk difference when the outcome is binary and the restricted mean survival time and risk difference when the outcome is time-to-event.
After our aforementioned report (which contains some of our simulation results for ordinal and time-to-event outcomes), the FDA released a guidance for industry on COVID-19 treatment and prevention trials (FDA, 2020) . The guidance contains the following statement, which is similar to our key recommendation regarding covariate adjustment: "To improve the precision of treatment effect estimation and inference, sponsors should consider adjusting for prespecified prognostic baseline covariates (e.g., age, baseline severity, comorbidities) in the primary efficacy analysis and should propose methods of covariate adjustment."
There is already an extensive literature on the theory and practice of covariate adjustment, e.g., Yang and Tsiatis (2001). However, covariate adjustment is underutilized, particularly for trials with a binary, ordinal, or time-to-event outcome. Since many COVID-19 treatment trials focus on these types of outcomes, our goal is to demonstrate the potential benefits of covariate adjustment in these contexts. Recent examples of COVID-19 treatment trials include a trial of dexamethasone with 28-day mortality as the primary outcome (The RECOVERY Collaborative Group, 2020) and three trials of remdesivir with the following primary outcomes: clinical status on day 14 using a 7-point ordinal scale (Goldman et al., 2020), time to clinical improvement (Wang et al., 2020), and time to death (Beigel et al., 2020).
The remainder of this paper is organized as follows. A brief background on covariate adjustment in randomized trials is provided in Section 2, and Section 3 describes the estimands and estimators. For a binary outcome, the unadjusted estimator of P(Y = 0 | A = a) is the sample proportion of bad outcomes among patients assigned to arm A = a. A covariate adjusted estimator of this quantity can be based on the standardization approach of Ge et al. (2011), as indicated in the FDA COVID-19 guidance (FDA, 2020). This estimator is identical to that of Moore and van der Laan (2009a), and for the risk difference it is a special case of estimators from Scharfstein et al. (1999). First, a logistic regression model is fit for the probability of a bad outcome given study arm and baseline variables. Next, for each participant (from both arms), a predicted probability of a bad outcome is obtained under each possible arm assignment a ∈ {0, 1} by plugging in the participant's baseline variables and setting the arm assignment to A = 0 and A = 1, respectively, in the fitted model. Lastly, the covariate adjusted estimator of P(Y = 0 | A = a) is the sample mean over all participants (pooling across arms) of the predicted outcome setting A = a.
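To make the three steps concrete, here is a minimal R sketch of the standardization estimator on simulated data; the variable names and data generating values below are illustrative assumptions, not the paper's analysis.

```r
# Standardization (g-computation) estimator of the risk difference.
set.seed(1)
n   <- 500
x   <- rnorm(n)                                      # baseline covariate
a   <- rbinom(n, 1, 0.5)                             # randomized arm
bad <- rbinom(n, 1, plogis(-1 + 0.8 * x - 0.5 * a))  # 1 = bad outcome (Y = 0)
dat <- data.frame(bad, a, x)

# Step 1: logistic working model for bad outcome given arm and covariates
fit <- glm(bad ~ a + x, family = binomial, data = dat)

# Steps 2-3: predict for every participant under each arm, then average
p1 <- mean(predict(fit, newdata = transform(dat, a = 1), type = "response"))
p0 <- mean(predict(fit, newdata = transform(dat, a = 0), type = "response"))
rd_adjusted <- p1 - p0                               # adjusted risk difference
```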
We consider three estimands when the outcome is ordinal, with levels 1, . . . , K. Without loss of generality, we assume that higher values of the ordinal outcome are preferable. In what follows, we let $(A, Y)$ and $(\tilde{A}, \tilde{Y})$ denote independent treatment-outcome pairs.
Estimand 1: Difference in means (DIM). For u(·) a pre-specified, real-valued transformation of the outcome, the estimand is defined as
$$\mathrm{DIM} = E[u(Y) \mid A = 1] - E[u(Y) \mid A = 0].$$
In most settings, this transformation will be monotone increasing so that larger values of the ordinal outcome will result in larger, and therefore preferable, transformed outcomes.
Transformations could incorporate, e.g., utilities assigned to each level, as has been done in some stroke trials (Chaisinanunkul et al., 2015; Nogueira et al., 2018) .
Estimand 2: Mann-Whitney (MW) estimand. This estimand reports the probability that a random individual assigned to treatment will have a better outcome than a random individual assigned to control, with ties broken at random. The estimand is defined as
$$\mathrm{MW} = P(Y > \tilde{Y} \mid A = 1, \tilde{A} = 0) + \tfrac{1}{2}\, P(Y = \tilde{Y} \mid A = 1, \tilde{A} = 0).$$
Estimand 3: Log-odds ratio (LOR). We consider a nonparametric extension of the log-odds ratio (LOR) (Díaz et al., 2016), defined as the average of the cumulative log odds ratios over levels 1 to K − 1 of the outcome, namely
$$\mathrm{LOR} = \frac{1}{K-1} \sum_{j=1}^{K-1} \left[ \operatorname{logit} P(Y \le j \mid A = 1) - \operatorname{logit} P(Y \le j \mid A = 0) \right].$$
In the case that the distribution of the outcome given study arm is accurately described by a proportional odds model of the outcome against treatment (McCullagh, 1980) , this estimand is equal to the coefficient associated with treatment.
All three estimands are smooth summaries of the treatment-specific cumulative distribution functions (CDFs) of the ordinal outcome. The CDF for arm a ∈ {0, 1} evaluated at level j is $F(j \mid a) = P(Y \le j \mid A = a)$, and the corresponding probability mass function is $f(j \mid a) = F(j \mid a) - F(j-1 \mid a)$, with $F(0 \mid a) = 0$. The estimands can be equivalently expressed in terms of the CDFs as follows:
$$\mathrm{DIM} = \sum_{j=1}^{K} u(j)\,\{f(j \mid 1) - f(j \mid 0)\}, \qquad \mathrm{MW} = \sum_{j=1}^{K} f(j \mid 1)\,\{F(j-1 \mid 0) + f(j \mid 0)/2\},$$
$$\mathrm{LOR} = \frac{1}{K-1} \sum_{j=1}^{K-1} \left[ \operatorname{logit} F(j \mid 1) - \operatorname{logit} F(j \mid 0) \right].$$
To estimate these quantities, it suffices to estimate the arm-specific CDFs and then to evaluate the summaries; such estimators are called plug-in estimators.
The unadjusted estimator of the CDF in each arm is the empirical distribution in that arm. The resulting plug-in estimator for the DIM is the difference between arms of sample means of the transformed outcomes. Also, the resulting plug-in estimator (denoted M ) for the MW estimand is closely related to the usual Mann-Whitney U-statistic U = n 0 n 1 M , where n 0 and n 1 are the total sample sizes in the two study arms.
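The following R sketch computes all three unadjusted plug-in estimators from the arm-specific empirical CDFs; the data are synthetic and u is taken to be the identity transformation (all names are ours).

```r
# Unadjusted plug-in estimators of DIM, MW, and LOR (K ordinal levels,
# higher = better).
set.seed(2)
K  <- 3
y1 <- sample(1:K, 150, replace = TRUE, prob = c(0.10, 0.20, 0.70))  # treatment
y0 <- sample(1:K, 150, replace = TRUE, prob = c(0.10, 0.35, 0.55))  # control

f1 <- tabulate(y1, K) / length(y1); F1 <- cumsum(f1)  # empirical PMF and CDF
f0 <- tabulate(y0, K) / length(y0); F0 <- cumsum(f0)

u <- 1:K                                              # u(j) = j
dim_hat <- sum(u * (f1 - f0))                         # difference in means
mw_hat  <- sum(f1 * (c(0, head(F0, -1)) + f0 / 2))    # Mann-Whitney estimand
lor_hat <- mean(qlogis(F1[-K]) - qlogis(F0[-K]))      # average cumulative log OR
```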
Model-robust, covariate adjusted estimators are available for estimation of the MW estimand, e.g., Vermeulen et al. (2015) , and for the LOR estimand, e.g., Díaz et al. (2016) .
We use a slightly different approach, as described below. Comparing the performance of our method to those from related works is an area of future research.
Our covariate adjusted estimator of the CDF in each arm, presented in Appendix B of the Supporting Information, leverages prognostic information in baseline variables. It uses working models, i.e., models that are fit in the process of computing the estimator but which we do not assume to be correctly specified. Specifically, the adjusted estimator of the CDF for each study arm a ∈ {0, 1} is based on the following arm-specific, proportional odds working model for the cumulative probability of the outcome given the baseline variables: $\operatorname{logit}\{P(Y \le j \mid A = a, X)\} = \alpha_j + \beta^\top X$ for each j = 1, . . . , K − 1, with parameters $\alpha_1, \ldots, \alpha_{K-1}$ and $\beta$; the model for the other study arm is the same but with a separate set of parameters. Each model is fit using data from the corresponding study arm, yielding two working covariate-conditional CDFs (one per arm). For each arm, the estimated marginal CDF is then obtained by averaging the corresponding conditional CDF across the empirical distribution of baseline covariates pooled across the two study arms. The above methods are implemented in an accompanying R package, drord.
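A simplified sketch of this working-model approach, using MASS::polr for the arm-specific proportional odds fits, is shown below; it is not the drord implementation, and the data and names are synthetic.

```r
library(MASS)  # for polr()
set.seed(3)
n <- 400; K <- 3
x <- rnorm(n); a <- rbinom(n, 1, 0.5)
y <- cut(x + 0.5 * a + rlogis(n), c(-Inf, 0, 1.5, Inf), labels = FALSE)
dat <- data.frame(y = factor(y, levels = 1:K, ordered = TRUE), a, x)

adjusted_cdf <- function(arm) {
  # arm-specific proportional odds working model, fit on that arm only
  fit <- polr(y ~ x, data = subset(dat, a == arm))
  # conditional class probabilities for ALL participants (covariates pooled
  # across arms), averaged to the marginal PMF, then cumulated to the CDF
  cumsum(colMeans(predict(fit, newdata = dat, type = "probs")))
}
F1 <- adjusted_cdf(1); F0 <- adjusted_cdf(0)  # plug into the estimand formulas
```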
The validity (i.e., consistency and asymptotic normality) of the adjusted CDF estimator given above in no way relies on correct specification of the aforementioned working model.
This property also holds for the estimators of Vermeulen et al. (2015) and Díaz et al. (2016) .
We consider three treatment effect estimands in the time-to-event setting, all of which are interpretable under violations of a proportional hazards assumption. To define these estimands, we let T be a time-to-event outcome, C a right-censoring time, A a treatment indicator, and X a collection of baseline covariates.
Estimand 1: Difference in restricted mean survival times (RMSTs). The RMST is the expected value of a survival time that is truncated at a specified time τ (Chen and Tsiatis, 2001; Royston and Parmar, 2011), that is, $\mathrm{RMST}(a) = E[\min(T, \tau) \mid A = a]$; the estimand is $\mathrm{RMST}(1) - \mathrm{RMST}(0)$.
Estimand 2: Survival probability difference (also called risk difference, RD).
Difference between the arm-specific probabilities of survival to a specified time t*, that is, $\mathrm{RD} = P(T > t^* \mid A = 1) - P(T > t^* \mid A = 0)$.
Estimand 3: Relative risk (RR). Ratio of the arm-specific probabilities of survival to a specified time t*, that is, $\mathrm{RR} = P(T > t^* \mid A = 1) / P(T > t^* \mid A = 0)$.
Analogous to the ordinal outcome case, estimators of these parameters can be constructed from estimators of the survival functions for each arm. One approach to constructing adjusted estimators, used here, involves discretizing time and then: (i) estimating the time-specific hazard conditional on baseline variables, (ii) transforming to survival probabilities using the product-limit formula, and (iii) marginalizing using the estimated covariate distribution (pooled across arms). The adjusted approach as implemented here (and elsewhere; see references below) has two key benefits relative to unadjusted alternatives such as the unadjusted Kaplan-Meier estimator (Kaplan and Meier, 1958). First, the adjusted estimator's consistency depends on an assumption that censoring is independent of the outcome given study arm and baseline covariates (C ⊥⊥ T | A, X), rather than an assumption that censoring in each arm is independent of the outcome marginally (C ⊥⊥ T | A). The former may be a more plausible assumption. Second, in large samples and under regularity conditions, the adjusted estimator is at least as precise as the unadjusted estimator when censoring is completely at random, i.e., when in each arm a ∈ {0, 1}, C ⊥⊥ (T, X) | A = a.
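The following R sketch illustrates steps (i)-(iii) with a pooled logistic working model for the discrete-time hazard. It deliberately simplifies the estimator used in the paper (hazard linear in time, no censoring weights), and all data and names are synthetic; the survtmlerct package implements the full method.

```r
set.seed(4)
n <- 400; tmax <- 14
x <- rnorm(n); a <- rbinom(n, 1, 0.5)
time  <- pmin(1 + rgeom(n, plogis(-2.5 + 0.3 * x - 0.3 * a)), tmax)
event <- rbinom(n, 1, 0.95)                  # ~5% censored at their last time
dat <- data.frame(x, a, time, event)

# person-period ("long") format: one row per subject per at-risk interval
long <- do.call(rbind, lapply(seq_len(n), function(i) {
  data.frame(t = seq_len(dat$time[i]), x = dat$x[i], a = dat$a[i],
             d = as.numeric(seq_len(dat$time[i]) == dat$time[i] &
                              dat$event[i] == 1))
}))

adjusted_survival <- function(arm) {
  haz <- glm(d ~ t + x, family = binomial, data = subset(long, a == arm)) # (i)
  mean(sapply(seq_len(n), function(i) {      # (iii) marginalize over pooled X
    h <- predict(haz, newdata = data.frame(t = 1:tmax, x = dat$x[i]),
                 type = "response")
    prod(1 - h)                              # (ii) product-limit at tmax
  }))
}
rd_hat <- adjusted_survival(1) - adjusted_survival(0)
```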
Covariate adjusted estimators for time-to-event outcomes include, e.g., Chen and Tsiatis (2001) ; Rubin and van der Laan (2008b); Moore and van der Laan (2009b); Stitelman et al.
In each setting below, we simulated trials with 1:1 randomization to the two arms and total enrollment of n = 100, 200, 500, and 1000. In each case, 1000 trials were simulated. In each simulated trial, the n participant data vectors are i.i.d. draws from a population data generating distribution that depends on the setting.
Data for simulated control-arm participants were based on real data, while data for simulated treatment-arm participants were generated by modifying the outcome distribution observed in the real data to achieve a desired level of treatment effect (details below). In all simulation settings, covariates are approximately equally prognostic across arms and there is no treatment effect heterogeneity. In the CDC report, lower and upper estimates were reported for each age group-specific outcome probability; we used the average of these within each age group to define our data generating distributions.
For the hospitalized COVID-19 positive population, the resulting outcome probabilities for each age group are listed in Table 1 .
[ Table 1 about here.]
We separately considered two types of treatment effects in our data generating distributions: no treatment effect and an effective treatment. For the former, we randomly sampled n age-outcome pairs according to the distribution in Table 1 and then independently assigned study arm with probability 1/2 for each arm.
For the latter case (effective treatment), we randomly generated control arm participants as in the previous paragraph and randomly generated treatment arm participants by modifying the values in Table 1 to achieve a desired level of true treatment effect. Specifically, the probabilities P(ICU admission and survived | age) in column 4 were proportionally reduced, while P(No ICU admission and survived | age) were increased by an equal amount.
The probabilities of death given age in column 3 were not changed. This modified table corresponds to a scenario where the treatment has no effect on the probability of death but decreases the odds of ICU admission among those who survive by the same relative amount in each age category.
The aforementioned relative reduction (and the resulting treatment effect) was separately selected for each sample size n = 100, 200, 500, 1000. For the DIM estimand at each sample size, we selected treatment effect sizes such that a t-test using the unadjusted estimator would achieve roughly 50% and 80% power, respectively, to reject the null hypothesis of no treatment effect. For sample size n = 100, we instead set this relative reduction to achieve roughly 30% and 40% power, respectively, because there did not exist a relative reduction that achieved 80% power at this sample size. The same data generating distributions used for the DIM estimand were also used for the MW and LOR estimands.
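A sketch of this data generating mechanism is given below. The age mix and outcome probabilities are placeholders rather than the Table 1 values, and the size of the reduction is arbitrary.

```r
set.seed(5)
p_ctrl <- rbind(                 # rows: age group; columns: P(death),
  young = c(0.01, 0.10, 0.89),   # P(ICU & survived), P(no ICU & survived)
  mid   = c(0.05, 0.20, 0.75),
  old   = c(0.15, 0.30, 0.55))
reduce <- 0.30                   # assumed proportional reduction

treated_probs <- function(pr) {  # death unchanged; ICU mass shifted to no-ICU
  delta <- reduce * pr[2]
  c(pr[1], pr[2] - delta, pr[3] + delta)
}

draw_arm <- function(n, treated) {
  age <- sample(rownames(p_ctrl), n, replace = TRUE)
  y <- sapply(age, function(g) {
    pr <- if (treated) treated_probs(p_ctrl[g, ]) else p_ctrl[g, ]
    sample(1:3, 1, prob = pr)    # 1 = death, 2 = ICU, 3 = no ICU (best)
  })
  data.frame(age, y, a = as.numeric(treated))
}
trial <- rbind(draw_arm(100, FALSE), draw_arm(100, TRUE))
```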
In our simulations, we used the adjusted estimator described in Section 3.2, where age is coded using the categories in Table 1 . Specifically, these age categories are included as the main terms in the linear parts of the proportional odds working models.
For simplicity, for binary and ordinal outcomes we simulated trials with no missing data.
However, the methods we used can adjust for missing outcomes (see Appendix B of the Supporting Information).

4.1.3 Time-to-event outcomes. In this simulation, the outcome is time from hospitalization to the first of intubation or death, and the predictive variables used are sex, age, whether the patient required supplemental oxygen at ED presentation, dyspnea, hypertension, and the presence of bilateral infiltrates on the chest x-ray. We focus on the RMST 14 days after hospitalization and the RD of remaining intubation-free and alive 7 days after hospitalization.
Our data generation distribution is based on a database of over 500 patients hospitalized at
Weill Cornell Medicine New York Presbyterian Hospital prior to March 28, 2020. Outcome information was known for all patients through at least day 14. Patient data were re-sampled with replacement to generate 1000 datasets, each of the sizes n = 100, 200, 500, and 1000. For each dataset, a hypothetical treatment variable was drawn from a Bernoulli distribution with probability 0.5 independently of all other variables. Positive treatment effects were simulated by adding an independent random draw from a χ 2 distribution to each participant's outcome in the treatment arm; we used χ 2 distributions with 2 and 4 degrees of freedom, respectively, to generate two different effect sizes. These data generating distributions correspond to a difference in RMST of 0.507 and 1.004 at 14 days, and an RD of 3.5% and 8.8% at seven days, respectively. Five percent of the patients were selected at random to be censored, and their censoring time was drawn from a uniform distribution on {1, . . . , 14}.
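The treatment-effect and censoring mechanism can be sketched as follows; because the Weill Cornell data are not public, the baseline event times below are placeholders.

```r
set.seed(6)
n  <- 200
t0 <- pmin(1 + rgeom(n, 0.15), 30)         # placeholder baseline event times
a  <- rbinom(n, 1, 0.5)                    # hypothetical treatment indicator
t1 <- t0 + ifelse(a == 1, rchisq(n, df = 2), 0)  # treatment delays events
cens  <- runif(n) < 0.05                   # 5% of patients censored
ctime <- ifelse(cens, sample(1:14, n, replace = TRUE), Inf)
time  <- pmin(t1, ctime)                   # observed follow-up time
event <- as.numeric(t1 <= ctime)
tapply(pmin(t1, 14), a, mean)              # arm-specific RMST at day 14
                                           # (uncensored times, illustration only)
```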
We compare the performance of the unadjusted, Kaplan-Meier-based estimator to the covariate adjusted estimator. These estimators are defined in Sections 4 and 6 of (Díaz et al., 2019) , respectively, and implemented in the R package survtmlerct. Wald-type confidence intervals and corresponding tests of the null hypothesis of no effect are reported.
We compare the type I error and power of tests of the null hypothesis H 0 of no treatment effect based on unadjusted and adjusted estimators, both within and across estimands. For each estimand, we also compare the bias, variance, and mean squared error of the unadjusted and the adjusted estimators.
We approximate the relative efficiency of the unadjusted relative to the adjusted estimator by the ratio of the mean squared error of the latter to the mean squared error of the former. In all of our simulation studies, this is similar to the corresponding ratio of variances, since the squared bias was always much smaller than the variance. One minus this relative efficiency is approximately the proportional reduction in sample size needed for a covariate adjusted estimator to achieve the same power as the unadjusted estimator (van der Vaart, 1998, pp. 110-111).
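As a worked example of this interpretation (the numbers are illustrative, not taken from our tables):

```r
mse_adjusted   <- 0.86
mse_unadjusted <- 1.00
rel_eff <- mse_adjusted / mse_unadjusted  # relative efficiency
1 - rel_eff                               # ~14% smaller n needed with adjustment
```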
For binary and ordinal outcomes, we present results that use the nonparametric BCa bootstrap (Efron and Tibshirani, 1994) for confidence intervals and hypothesis tests. We used 1000
replicates for each BCa bootstrap confidence interval. While we recommend 10000 replicates in practice, the associated computational time was too demanding for our simulation study.
Nonetheless, we expect similar or slightly better performance with an increased number of bootstrap samples. Results that use closed-form, Wald-based inference methods are presented in Appendix C of the Supporting Information.
For time-to-event outcomes, we used Wald-based confidence intervals, since these made the computations faster compared to the BCa bootstrap method.

Table 2 compares the performance of the unadjusted and adjusted estimators when the outcome is death or survival with ICU admission and the estimand is the risk difference.
The relative efficiency of the unadjusted method relative to the adjusted method varied from 0.92 to 0.86. This is roughly equivalent to needing an 8-14% smaller sample size when using the adjusted estimator compared to the unadjusted estimator to achieve the same power.
Type I error of the covariate adjusted method was comparable to that of the unadjusted method. The covariate adjusted method achieved higher power across all settings. Absolute gains in power varied from 5% to 14%.
[Tables 3-5 about here.] For the ordinal outcome, the gains were roughly equivalent to needing 6-15% (MW) and 7-15% (LOR) smaller sample sizes, with comparable reductions for the DIM, when using the adjusted estimator compared to the unadjusted estimator, to achieve the same power.
Type I error control of the covariate adjusted methods was comparable to that of the unadjusted methods. The covariate adjusted methods achieved higher power across all settings.
Absolute gains in power varied from 1% to 6% for the DIM, 1% to 6% for the MW estimand, and 1% to 5% for the LOR.
To evaluate the importance of adjusting for multiple baseline variables, we also evaluated an adjusted RMST estimator that adjusts only for age and sex; see Appendix C of the Supporting Information. The gains of this estimator relative to the unadjusted methods were small, with absolute gains in power of approximately 0%-1% and relative efficiency ranging from 0.96 to 1.00. These results suggest that there can be a meaningful benefit from adjusting for prognostic covariates beyond just age and sex.
We also considered the risk difference (RD) estimand; see Appendix C of the Supporting Information. The results (when adjusting for age and sex along with the four other variables described in Section 4.1.3) are qualitatively similar, except with slightly smaller precision gains, to those for the RMST in Table 6 .
The Type I error in Table 6 for the unadjusted estimator at n = 100 is 1.1%, much smaller than the nominal level. We conjecture this is due to the Wald-type asymptotic inference procedure being a poor approximation at this sample size. This is illustrated by the fact that the scaled variance at n = 100 is much smaller than the scaled variance at n = 1000.
Similar comments apply to some of the results in Appendix C of the Supporting Information.
[ Table 6 about here.]
Recommendations below that do not reference related work or our simulation results are based on the authors' experience.
(1) Estimand when the outcome is ordinal. If a utility function can be agreed upon to transform the outcome to a score with a clinically meaningful scale, then we recommend using the difference between the transformed means in the treatment and control arms.
Otherwise, we recommend using the unweighted difference between means or the Mann-Whitney estimand. We recommend against estimating log odds ratios, since clinical interpretation requires considerable nuance (Díaz et al., 2016) and the corresponding estimators (even unadjusted ones) can be unstable at small sample sizes (Appendix C of the Supporting Information).
(2) Covariate adjustment. Based on our simulations, we recommend adjustment for prognostic baseline variables to improve precision and power. In the context of COVID-19 trials, we expect improvements to be substantial, since several baseline variables, e.g., age and co-morbidities, are already known to be prognostic. We did not consider it here, but one may consider using an algorithm for variable selection from a prespecified list of candidate variables; see, e.g., Tsiatis et al. (2008).

(4) Information monitoring. We summarize our recommendations below and give more detailed recommendations in Appendix E of the Supporting Information. First, consider trials without interim analyses. Information monitoring can be used to determine how long the trial will continue. Before starting the trial, one computes the information level required to achieve the desired power at a fixed alternative. Then, during the trial, the accrued information (defined as the reciprocal of the estimator's variance) is monitored, and the trial is continued until the required information level is surpassed. In this way, covariate adjustment can lead to faster trials even when the treatment effect is zero (i.e., when the null hypothesis is true); this may be more ethical in settings where it is desirable to stop as early as possible to avoid unnecessary exposure to side effects.
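A small R sketch of this information-monitoring calculation is given below, using the standard required-information formula; the alternative delta is an assumed value.

```r
# Required information for a two-sided level-alpha test with power 1 - beta
# at the fixed alternative delta.
alpha <- 0.05; beta <- 0.20; delta <- 0.15     # delta is hypothetical
info_required <- ((qnorm(1 - alpha / 2) + qnorm(1 - beta)) / delta)^2

# Accrued information is the reciprocal of the estimator's variance; the
# trial continues until it exceeds the required level.
accrued_information <- function(se_hat) 1 / se_hat^2
# e.g., stop enrollment once accrued_information(current_se) >= info_required
```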
Next, consider trials with interim analyses. For the estimands and adjusted estimators that we considered for continuous, binary, or ordinal outcomes, one can directly apply the group sequential, information-based designs of Scharfstein et al. (1997) ; Turnbull (1997, 1999) . This can be done as long as data from pipeline participants, that is, participants who enrolled but have not been in the study long enough to have their primary outcomes measured, are not used when conducting interim analyses.
This is because the key property needed to apply the aforementioned group sequential designs, called the independent-increments property, is only guaranteed to hold if pipeline participant information is not used. There are methods for modifying the estimators through orthogonalization so that the independent-increments property holds even when using pipeline participant information (and similarly when the outcome is time-to-event), but this was not simulated in our paper and is an area of future research.
(5) Plotting the CDF and the probability mass function (PMF) when the outcome is ordinal. Regardless of which treatment effect definition is used in the primary efficacy analysis, we recommend that the covariate adjusted estimate of the PMF and/or CDF of the primary outcome be plotted for each study arm when the outcome is ordinal.
Pointwise and simultaneous confidence intervals should be displayed (where the latter account for multiple comparisons). This is analogous to plotting Kaplan-Meier curves for time-to-event outcomes, which can help in interpreting the trial results. For example, Figure 1 shows covariate adjusted estimates of the CDF and PMF for a data set from our simulation study. From the plots, it is evident that the effect of the treatment on the ordinal outcome is primarily through preventing ICU admission, with no impact on probability of death.
(6) Missing covariates. We do not recommend adjusting for baseline covariates that are expected to have high levels of missing data. For the situation with low levels of missing data, it is simplest to singly impute missing values based only on data from those baseline covariates that were observed. To ensure that treatment assignment is independent of all baseline covariates (including imputed ones), no treatment or outcome information should be used in this imputation. For performing inference based on the bootstrap, the bootstrap sample should be drawn first, then missing covariates should be imputed.
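A minimal sketch of this recommendation, assuming numeric covariates and mean imputation as the simple fill-in rule:

```r
# Fill missing baseline covariates using only observed covariate values
# (never treatment or outcome data). X is a data.frame of numeric covariates.
impute_baseline <- function(X) {
  for (j in seq_along(X)) {
    miss <- is.na(X[[j]])
    if (any(miss)) X[[j]][miss] <- mean(X[[j]], na.rm = TRUE)
  }
  X
}
# For bootstrap inference: draw the bootstrap sample first, then impute
# within each resampled data set.
```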
(7) Missing ordinal outcomes. We recommend handling missing ordinal outcomes using methods that are robust to model misspecification, such as the one described in Appendix B of the Supporting Information. Compared to a complete-case analysis, these approaches weaken the assumption of missingness from missing completely at random to missing at random. Nevertheless, these methods are still subject to bias in the presence of unmeasured factors that influence the study outcome and missingness probability.
Therefore, trials should seek to minimize the likelihood of missing outcomes and employ relevant sensitivity analyses to address the robustness of study findings to assumptions about missing data (National Research Council, 2010); this applies to all outcome types.
(8) Loss to follow up with time-to-event outcomes. We recommend accounting for loss-to-follow-up using methods that are robust to model misspecification such as those described in Benkeser et al. (2018) ; Díaz et al. (2019) . These methods rely on a potentially more plausible condition on the censoring distribution than do unadjusted methods, as discussed in Section 3.3. The covariate adjusted estimator that we used for the restricted mean survival time in the time-to-event setting is robust to misspecification of one of its working models (as long as the other is correctly specified) under censoring being independent of the outcome given baseline variables and arm assignment.
In our simulated data generating distributions, the correlations between baseline variables and the outcome were similar for each arm. Because we designed our data generating distributions to mimic the correlations between baseline variables and outcomes from observational study data, these may be reasonable approximations to the control arm (i.e., standard of care) of a trial involving the same population. If the treatment arm in such a trial has similar correlations between baseline variables and the outcome, then the precision gains in such a trial may be similar to those in our simulations. However, if the treatment arm in such a trial has smaller correlations between baseline variables and the outcome, then the precision gains in such a trial may be smaller than those in our simulations.
Adjusting for baseline variables beyond just age and sex led to substantial improvements in precision in our simulations involving time-to-event outcomes, as described in Section 5.3.
For the other outcome types, i.e., binary and ordinal, our data generating distributions only had one baseline variable, age; this is all that was available in the CDC data, so we were not able to investigate the value added by adjusting for more variables.
The described methods for binary and ordinal outcomes can be adapted to handle the case where stratified randomization on a subset of the measured baseline covariates is used.
Specifically, one can apply the general method of Wang et al. (2019), which gives a formula for consistently estimating the asymptotic variance of covariate adjusted estimators under stratified randomization. This method can be applied to any M-estimator, and therefore applies to the estimators that we considered for binary and ordinal outcomes. To the best of our knowledge, it is an open problem to prove consistency and asymptotic normality for covariate adjusted estimators with time-to-event outcomes under stratified randomization.
The data and code needed to reproduce the simulations for ordinal and binary outcomes are available on GitHub at https://github.com/mrosenblum/COVID-19-RCT-STAT-TOOLS.
These data were derived from the following resource available in the public domain: (CDC COVID-19 Response Team, 2020). The code for the survival simulations is also included in that repository. However, because the simulation is based on private data from Weill Cornell Medicine (research data not shared), the results of the survival simulation reported in the manuscript are not reproducible based on the available code. We provide a simulated dataset (not based on real data) with the same structure as the real dataset, which can be used to run the simulation code.
The R package drord is available on GitHub at https://github.com/benkeser/drord, and the R package survtmlerct is available on GitHub at https://github.com/idiazst/survtmlerct.

Figure 1. Example figures illustrating covariate adjusted estimates of the PMF and CDF by study arm with pointwise (black) and simultaneous (gray) confidence intervals. "ICU" represents survival and ICU admission; "None" represents survival and no ICU admission.

Tables 2-5. Results in the hospitalized population for, respectively: the binary outcome and risk difference (RD) estimand (Table 2); the ordinal outcome and difference in means (DIM) estimand (Table 3); the ordinal outcome and Mann-Whitney (MW) estimand (Table 4); and the ordinal outcome and log-odds ratio (LOR) estimand (Table 5). In each table, BCa bootstrap is used for confidence intervals and hypothesis testing. "Effect" denotes the true estimand value; "MSE" denotes mean squared error; "Rel. Eff." denotes relative efficiency, which we approximate as the ratio of the MSE of the estimator under consideration to the MSE of the unadjusted estimator. In each block of six rows, the first two rows involve no treatment effect and the last four rows involve a benefit from treatment. MSE and variance are scaled by n; bias is scaled by n^{1/2}.
Obesity and smoking are lifestyle factors. Studies have reported correlations between obesity and severe illness or related death from COVID-19 [1-3]. For smoking, its relationship with the risk of severe COVID-19 is not clear [2, 4, 5]; however, these studies considered smoking only as a binary or categorical variable, without any consideration of heaviness or duration. The US Centers for Disease Control and Prevention suggests that people with obesity and people who smoke are at increased risk of severe COVID-19 illness [6]. Mendelian randomization (MR) uses exposure-associated genetic variants as instrumental variables to assess the causality between exposures and outcomes [7]. As genetic variants are randomly allocated at conception, MR resembles a randomized controlled trial and is less subject to confounding than observational studies. Publicly available genome-wide association study (GWAS) summary statistics provide valuable resources for assessing the causality between lifestyle factors and the risk of severe COVID-19 illness. An MR study found evidence that both body mass index (BMI) and smoking had a causal effect on the risks of COVID-19 with respiratory failure and of hospitalization with COVID-19; however, the estimated causal effects had limited precision [8].
This study aimed to investigate the causality between four lifestyle factors, namely BMI, smoking, alcohol consumption and physical activity, and severe COVID-19 illness using the two-sample MR approach [9].
Summary-level data were obtained from two GWAS analyses conducted by the COVID-19 Host Genetics Initiative [10] (Release 4, September 2020): 1) 2,972 very severe respiratory confirmed COVID-19 cases, defined as hospitalized laboratory-confirmed SARS-CoV-2 infection (RNA and/or serology based) with death or respiratory support, and hospitalization with COVID-19 as the primary reason for admission, compared with 284,472 population controls; and 2) 6,492 hospitalized confirmed COVID-19 cases, defined as hospitalized laboratory-confirmed SARS-CoV-2 infection (RNA and/or serology based) and hospitalization due to corona-related symptoms, compared with 1,012,809 controls. The majority (≥90%) of the participants included in the GWAS analyses were of European ancestry.
Details of the GWAS analyses can be found at https://www.covid19hg.org/.
Genome-wide significant genetic variants identified from GWAS were selected as instrumental variables. Physical activity: five variants identified to be associated with accelerometer-measured overall physical activity (measured as average vector magnitude) in a sample of ~91,000 UK Biobank individuals, explaining ~0.2% of the variation in overall physical activity [14]. Proxies with a minimum linkage disequilibrium r² = 0.8 were used for genetic variants that were unavailable in the COVID-19 data sources (two, one and two variants for BMI, alcohol consumption and physical activity, respectively; one alcohol consumption variant had no proxy available, so it was not included in the analysis).
The statistical power was calculated using the proportion of variation in the lifestyle risk factor explained by the genetic instrumental variables, the sample size of the COVID-19 GWAS, and the method proposed by Burgess [15].
The main analyses were performed using the inverse-variance weighted (IVW) method under a random-effects model [16], which assumes that all genetic variants are valid instrumental variables, or that any horizontal pleiotropy is balanced. For each risk factor, the reported odds ratio (OR) on COVID-19 risk was per standard deviation increase in the genetically predicted value. Leave-one-out analyses, i.e., applying IVW after removing each genetic variant in turn, were performed to assess the influence of each genetic variant on the results. Sensitivity analyses were performed using MR-Egger regression [17], the weighted median method [18] and the weighted mode method [19], which relax some MR assumptions and allow some genetic instrumental variables to be invalid, but are less powerful than the IVW method. The greater the consistency across the point estimates of the methods, the stronger the evidence supporting a causal effect of the investigated risk factors on severe COVID-19 illness.
The analyses were conducted using the TwoSampleMR R package [20]. All statistical tests were two-sided. Results with a nominal P-value < 0.05 were considered statistically significant.
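To make the IVW mechanics concrete, here is a base-R sketch with simulated summary statistics (the actual analyses used the TwoSampleMR package; bx, by and seby below are illustrative, not the GWAS data).

    ## Random-effects IVW estimator from summary data. bx, by: SNP-exposure and
    ## SNP-outcome (log OR) effects; seby: SNP-outcome standard errors.
    ivw_random <- function(bx, by, seby) {
      ratio <- by / bx                         # per-variant causal estimates
      w     <- (bx / seby)^2                   # inverse-variance weights
      est   <- sum(w * ratio) / sum(w)
      q     <- sum(w * (ratio - est)^2)        # heterogeneity (Cochran's Q)
      phi   <- max(1, q / (length(bx) - 1))    # multiplicative overdispersion
      se    <- sqrt(phi / sum(w))
      c(OR = exp(est), lo = exp(est - 1.96 * se), hi = exp(est + 1.96 * se))
    }
    set.seed(1)
    bx <- runif(25, 0.02, 0.08)                # hypothetical instrument effects
    by <- 0.6 * bx + rnorm(25, 0, 0.02)        # true causal log OR = 0.6
    ivw_random(bx, by, seby = rep(0.02, 25))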
For each of BMI, lifetime smoking, alcohol consumption and physical activity, the minimum OR detectable with 80% statistical power at the significance level of 0.05 was computed. Leave-one-out analyses suggested that the observed associations were not driven by any single genetic variant (data not shown). There was evidence of heterogeneity in the genetic variant-exposure effects for BMI and alcohol consumption, but not for lifetime smoking and physical activity (Table 2).
From the sensitivity analyses (Figure 1), causal effect estimates with the same direction across the methods were found for all the investigated risk factors, though some estimates had wider 95% CIs. Notably, positive associations (greater than the IVW estimates and the detectable effect sizes under 80% power) were found for alcohol consumption using methods other than the IVW method. Tests of the MR-Egger regression intercepts suggested no evidence of directional pleiotropy of the genetic instrumental variables, except that weak evidence was found for BMI and alcohol consumption with severe respiratory COVID-19 (both P < 0.03; Table 3).
Using a two-sample MR approach, this study found evidence that BMI and smoking have a causal effect on increased risk of severe COVID-19 illness. These findings are the same as those of Ponsford et al. [8]. For the first time, this study provides evidence that physical activity causally decreases the risk of severe COVID-19 illness. However, only five genetic instrumental variables were used, and they explained only ~0.2% of the variation in physical activity; the estimates of the causal effects were therefore of reduced precision. The findings are perhaps more important for establishing causality than for quantifying the causal effects.
As to alcohol consumption, this study had sufficient power to detect the observed IVW OR for COVID-19 hospitalization, but not for severe respiratory COVID-19. Interestingly, results from the MR-Egger regression, weighted median and weighted mode methods supported the association between genetically predicted alcohol consumption and COVID-19 severe illness.
The consistency across the three methods suggests that the observed associations are unlikely to be biased by the violated assumptions of any single method. The three methods allow some genetic instrumental variables to be invalid. From the MR-Egger regression analyses, the intercepts were estimated to be negative, and the one for severe respiratory COVID-19 was even significantly different from zero. Taking all these observations together, alcohol consumption might have a positive causal effect on severe COVID-19 illness, while the genetic instrumental variables overall might have negative directional pleiotropy, so the IVW results were not different from the null.
A limitation of this study is that there might be bias in the causal effect estimates, as there was sample overlap between the lifestyle-factor GWAS and the COVID-19 GWAS; e.g., UK Biobank participants were included in the GWAS of COVID-19 hospitalization. However, given that the proportions of COVID-19 cases in the GWAS analyses were low, any such bias should be minimal [22].
In conclusion, this study finds evidence that BMI and smoking causally increase, and physical activity causally decreases, the risk of severe COVID-19 illness. All these lifestyle risk factors are modifiable, so they could be targeted to reduce severe COVID-19 illness. This study highlights the importance of maintaining a healthy lifestyle in protecting against severe COVID-19 illness. The findings also have profound public health value: a healthy lifestyle could be helpful in fighting the COVID-19 pandemic.
Figure 1. Odds ratios (OR) and 95% confidence intervals (CI) of the genetically predicted lifestyle factors with severe COVID-19 illness across methods. OR and 95% CI are expressed per standard deviation increase in genetically predicted levels of body mass index (BMI), lifetime smoking measure, alcohol consumption (log-transformed standard drinks per week) and accelerometer-measured physical activity. The plots were right-truncated to better present the confidence intervals.
First, current influenza vaccines protect 70%-90% of healthy adults but only 30%-40% of members of high-risk groups (infants, elderly individuals, immunocompromised individuals, and patients with chronic underlying diseases) [2]. Second, the influenza viruses incorporated into the vaccines are selected on the basis of surveillance data of recent prevalent strains. This best-guess method of predicting future circulating strains cannot account for and, thus, does not protect against unexpected strains or unanticipated pandemics. Third, the 4-6-month period required for formulating a new virus strain into vaccines and ramping up large-scale production of the new vaccines is too long to meet the demand during a crisis situation created by a rapidly moving pandemic. Finally, adverse effects associated with current vaccines and the technical and regulatory hurdles for producing a new vaccine should not be underestimated.
There currently are 4 antiviral drugs for the treatment and prophylaxis of influenza. These drugs fall into 2 categories: M2 inhibitors (amantadine and rimantadine) and neuraminidase (NA) inhibitors (zanamivir and oseltamivir). Although these compounds can be used as chemoprophylactics, they are not substitutes for vaccination. Rather, they are used as adjuncts in controlling outbreaks of influenza virus infection. To be effective, the antiviral drugs have to be administered within the first 24-48 h after the development of symptoms. In addition, adverse effects, compliance problems, limited supply, and high costs preclude the widespread use of these antiviral drugs. Of still greater concern is the emergence of stable and transmissible drug-resistant strains of influenza virus [3]. Because of widespread resistance to rimantadine and amantadine, the Centers for Disease Control and Prevention advised physicians in January 2006 to stop prescribing these drugs for seasonal influenza virus infections. Similarly, influenza viruses that are resistant to NA inhibitors have been isolated from clinical samples [4]. This raises a serious concern about the widespread use of NA inhibitors.
The imminent threat of a global influenza pandemic caused by the avian influenza A(H5N1) virus demands the rapid development of new vaccines and the stockpiling of existing antiviral drugs. However, the limited efficacy and scope of current vaccines and antiviral drugs also demands the development of fundamentally new strategies to control influenza epidemics and pandemics. The natural reservoirs of influenza virus are aquatic birds and wild fowl, in which many different virus strains are circulating at any one time [5]. Viruses frequently are transmitted from wild species to domestic birds, causing devastating epidemics and pandemics in poultry populations. Avian viruses usually require adaptation before they can infect humans, although the current avian influenza A(H5N1) virus has been documented to infect humans directly and to cause severe disease. Annual influenza epidemics in human populations, including the documented outbreaks of 1997, 2003, and 2005, as well as the worldwide pandemics of 1918, 1957, and 1968, have been traced to avian sources [5, 6]. Influenza epidemics and pandemics are likely to intermittently cause havoc in poultry and human populations for the foreseeable future, unless effective long-term control can be achieved.
Since domestic poultry serves as a key link between the natural reservoir of influenza viruses and epidemics and pandemics in human populations, an effective measure to control influenza would be to eliminate or reduce influenza virus infection in domestic poultry, to reduce the probability that avian influenza virus variants with pandemic-causing potential will arise. One approach to developing transgenic poultry that are resistant to influenza viruses is to use RNA-interference (RNAi) technology. The development and distribution of influenza-resistant poultry represents a fundamentally new strategy for controlling influenza epidemics and pandemics at their origins, in both poultry and human populations. The strategy should complement current approaches of influenza control through the use of vaccines and antiviral drugs.
RNAi is an evolutionarily conserved process in metazoans by which double-stranded RNA directs sequence-specific degradation of mRNA. Studies have shown that RNAi can be triggered by the introduction of synthetic 21-nt RNA duplexes [7] , often referred to as "short interfering RNA" (siRNA), or by the expression of RNA duplexes in a hairpin structure [8] , referred to as "short hairpin RNA" (shRNA), which can be processed into siRNA by cellular RNA endonucleases. RNAi has been shown to be effective in interfering with viruses such as HIV, hepatitis B virus, respiratory syncytial virus, poliovirus, rhinovirus, severe acute respiratory syndrome-associated coronavirus, and dengue virus in cell culture and, in a few cases, in animals.
Directly relevant to influenza control, studies have shown that siRNAs specific for conserved regions of influenza virus genes potently inhibit replication of a broad spectrum of influenza viruses in cell lines, chicken embryos, and mice [9, 10]. Stable expression of influenza-specific shRNA via a lentiviral vector in a cell line renders the cells refractory to influenza virus infection [10]. Similarly, introduction of the same lentiviral vector into the mouse lung results in significant inhibition of virus production in vivo. Together, these findings suggest the possibility of developing influenza-resistant poultry by transgenic expression of influenza-specific shRNA.
Lentivirus has been the vector of choice for stable expression of shRNA in cells and animals [11] . Recently, lentivirus-mediated transgenesis has been shown to be very efficient in birds. More than 70% of founder birds contain the vector sequences, and 4%-45% of founder birds give rise to germline transmission [12, 13] . In contrast, transgenesis in chickens by direct DNA injection or infection with oncoretroviral vectors is much less efficient, often requiring the screening of thousands of birds to obtain a single germline transmission [14] . Furthermore, unlike transgene expression from oncoretroviral vectors, which is often silenced by epigenetic modifications during early ontogeny, transgene expression from integrated lentiviral vectors has been found to be stable for 4 generations (B.B.S. and C.L., unpublished data).
A critical consideration in developing influenza-resistant poultry is the prevention of the emergence of resistant viruses. One approach is to use siRNA targeting the conserved regions of influenza virus genes, because these regions either do not change or change much less frequently than do other regions, probably owing to structural and/or functional constraints. Targeting the highly conserved regions potentially allows the siRNA to remain effective despite antigenic drift and shift. It also has the potential to reduce the emergence of viable resistant variants. Another approach is to simultaneously express multiple shRNAs. The mutation rate of influenza virus is estimated to be 1.5 × 10^-5 mutations/nucleotide/infection cycle [15]. If 4 shRNAs are expressed simultaneously, the probability of the emergence of a resistant virus can be reduced to 1 resistant virus per 3 × 10^21 virions. This probability can be assessed by use of the following example. The United States produced ∼9 billion broiler chickens in 2006. For a resistant virus to arise, each chicken would have to produce 3 × 10^11 infectious virions. Under the assumption that 1 infected cell produces 3 × 10^3 new infectious virions, 10^8 cells/chicken would have to be infected. Thus, lentiviral vectors expressing ≥4 shRNAs specific for the conserved regions of the influenza virus genome should be used for transgenic poultry production.
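The arithmetic in the preceding paragraph can be reproduced directly; all values below are the passage's own figures or its stated assumptions.

    ## Back-of-envelope check of the resistance calculation in the text.
    virions_per_resistant <- 3e21   # 1 resistant virus per ~3e21 virions (text)
    chickens_us_2006      <- 9e9    # ~9 billion broilers produced in 2006 (text)
    virions_per_cell      <- 3e3    # assumed yield of one infected cell (text)
    virions_per_chicken <- virions_per_resistant / chickens_us_2006   # ~3e11
    cells_per_chicken   <- virions_per_chicken / virions_per_cell     # ~1e8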
The development and distribution of influenza-resistant poultry for the control of influenza in both poultry and human populations will be likely to face significant technical, logistical, and social challenges. First, technologies for producing influenza-resistant poultry have not been demonstrated experimentally. Although cells can be rendered influenza resistant and transgenic quails have been efficiently generated by use of reporter genes, the expression of ≥4 shRNAs from the same lentiviral vector has not been demonstrated. A proof of the concept of influenza resistance in a chicken or bird model, such as quail, also remains to be demonstrated.
Second, to be commercially viable, influenza-resistant chickens and ducks have to be produced in commercial pedigree lines. Introduction of transgenes into the pedigree lines would require the cooperation of commercial entities. Field testing of influenza resistance in transgenic chickens and ducks will require regulatory approval and close monitoring. In addition, to generate a large number (i.e., millions) of influenza-resistant chickens or ducks for commercial distribution from the same founder will be likely to take years and to require extremely stable transmission and expression of the transgene from generation to generation.
Third, transgenic poultry will be regarded as genetically modified organisms (GMOs). Since transgenic siRNA sequences are from the influenza virus genome, they should not pose any risk of influenza viruses acquiring novel or alien sequences. Resistance to GMO-based food in industrialized countries may prevent their introduction into the poultry industry in the near future. However, Asian countries, where frequent influenza outbreaks originate and where variants of epidemic-and pandemic-causing influenza virus usually arise, are likely to be more receptive to the introduction of influenza-resistant poultry. Influenza-resistant chickens and ducks may prove particularly effective in hindering the development and spread of disease in Asian countries where poultry farming is usually dispersed and is done in a family's backyard. In the long term, the introduction of influenza-resistant poultry, even in only some Asian countries, might reduce the frequency of influenza epidemics and pandemics in poultry populations, which in turn should reduce the frequency of global influenza epidemics and pandemics in human populations, leading to significant economic, social, and health benefits.
The development and distribution of influenza-resistant poultry represents a proactive strategy for controlling influenza epidemics and pandemics at their origin, in both poultry and human populations, and complements current approaches for influenza control by means of vaccines and antiviral drugs. However, the development and distribution of influenza-resistant poultry will be likely to require years and maybe even decades of research and development, as well as close collaboration among different entities. Nevertheless, the potential long-term economic and health benefits should justify the initial investment in research and development.
In epidemiology, incubation period is the time between the infection of an individual by a pathogen and the manifestation of symptoms, while generation time is defined as the time between the infection of a primary case and its secondary cases (Fine, 2003; Svensson, 2007) .
Both are vital clinical characteristics that depict an epidemic and are essential for policy making. For example, a good understanding of incubation period offers an optimal length of quarantine, and a good understanding of generation time is essential in estimating the transmission potential of a disease measured by the basic reproduction number R 0 (Farewell et al., 2005; Wallinga and Lipsitch, 2007; Nishiura, 2010) .
In most of the literature, such as Li et al. (2020) and Guan et al. (2020), the distribution of the incubation period is either described through a parametric model, for example, log-normal or Weibull, or through its empirical distribution based on the observed incubation periods from contact-tracing data. However, contact-tracing data are usually difficult to obtain, and can be highly influenced by the individuals' judgment of the possible date of exposure rather than the actual date of exposure, which, in turn, might not be accurately monitored and determined, leading to significant errors (Cowling et al., 2007).
An alternative approach to studying the incubation period is to take advantage of the mechanism of truncation or censoring. Lui et al. (1988), Struthers and Farewell (1989), De Gruttola and Lagakos (1989) and Kuo et al. (1991) estimated the incubation distribution of contagious diseases using external truncation or censoring information. Kuk and Ma (2005) studied the incubation period of SARS by deconvolution, but the proposed method was only feasible for a disease that is non-infectious during the incubation period, which is not the case for COVID-19. It also assumed that the ability of infectiousness is uniform during the infectious period, which is a strong assumption. In the studies of Lessler et al. (2009) and Reich et al. (2009), double censoring was used to characterize the problem caused by daily reports rather than continuously observed symptom onset times. Nishiura and Inaba (2011) used 72 confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 H1N1 pandemic to estimate the incubation distribution by addressing censoring and infection-age.
For COVID-19, Backer et al. (2020) and Linton et al. (2020) used confirmed cases detected outside Wuhan to estimate the incubation distribution by an interval-censoring likelihood. In their studies, for each selected case, a censored interval for the incubation period was obtained from travel histories and dates of symptoms onset, and the incubation distribution was then estimated by fitting the censored intervals to Weibull, Gamma and log-normal distributions. However, such approaches may lead to biased estimates of the incubation period due to biased sampling issues. Qin et al. (2020) adopted the theory of renewal processes and carefully selected the study cohort to overcome the biased sampling problems, but fitted a continuous parametric model to discrete observations, while the discreteness of the data is in fact a form of interval censoring caused by daily reports.
To the best of our knowledge, generation time is usually estimated directly by the serial interval, that is, the time difference between the symptom onsets of successive cases in a chain of transmission, rather than by the actual times of infection. This is because it is challenging to obtain the infection dates of both the primary case and its secondary cases in a chain of transmission, while the dates of symptom onsets are relatively easier to obtain. However, the distribution of the serial interval may be a biased estimate of that of the generation time, especially when the disease is infectious during incubation, in which case the variance is over-estimated (Britton and Scalia Tomba, 2019). As a result, subsequent quantities estimated based on the generation time are biased. For example, the basic reproduction number, indicating the spreading ability of an infectious disease, would be under-estimated.
Note that COVID-19 is incubation-infectious, hence the estimation of generation time simply based on observed serial intervals is not consistent.
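A quick numerical illustration of this variance inflation, using the decomposition S = G + I_2 − I_1 formalized in Section 4 with assumed Gamma distributions:

    ## Under the independence assumptions of Section 4, S = G + I2 - I1 gives
    ## Var(S) = Var(G) + 2 Var(I): serial intervals over-state the spread of the
    ## generation time. All distributions below are assumed, for illustration.
    set.seed(1)
    G  <- rgamma(1e5, 8, 2)                               # generation times
    I1 <- rgamma(1e5, 5, 0.8); I2 <- rgamma(1e5, 5, 0.8)  # incubation periods
    var(G + I2 - I1)                    # approx var(G) + 2 * var(I1) = 17.6
    var(G)                              # = 2, much smaller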
To overcome the aforementioned issues, in this paper we estimate the distribution of the incubation period using the well-studied renewal process in which there exists a censoring event within the incubation period. Vardi (1989) discussed nonparametric maximum likelihood estimation based on length-biased sampling and renewal processes with incomplete renewal data, and further the multiplicative censoring problem. A brief review can be found in Qin (2017). Issues related to length-biased sampling and interval-censored sampling are both taken into consideration in the estimation of the incubation distribution in this study. We show that under mild assumptions, the parameters in the incubation distribution are identifiable and enjoy desirable asymptotic properties. Furthermore, a consistent estimator for the distribution of generation time is also proposed based on the incubation period and observed serial intervals, for incubation-infectious and incubation-noninfectious diseases respectively. Our approaches increase the available sample size and utilize censored information in the early phase of an epidemic outbreak.
The rest of this paper is organized as follows. Section 2 describes the motivating data.
In Section 3, we propose algorithms to estimate the distribution of incubation period and
show that under mild assumptions the model parameters are identifiable and enjoy desirable asymptotic properties. In Section 4, we propose algorithms to estimate the distribution of generation time. Simulation studies are performed in Section 5, and the analyzed results to the current outbreak of COVID-19 in China are shown in Section 6. Further discussion is given in Section 7.
The COVID-19 outbreak in Wuhan, China has attracted world-wide attention (Huang et al., 2020; Tu et al., 2020). Publicly available data were collected from provincial and municipal health commissions in China and ministries of health in other countries and areas. The following details were collected on each confirmed case, including case ID. In the collected data, 645 chains of transmission were found, and n = 198 of them have dates of symptom onset available which can be used to calculate serial intervals. These 198 observed serial intervals, {s_j, j = 1, . . . , n}, range from −13 to 21 days, with a mean of 4.6 days and quartiles of 1, 4 and 7 days. The same subset of the data used in Qin et al. (2020) is considered in this study for the estimation of the incubation period. This subset includes the confirmed cases who left Wuhan between January 19 and 23, 2020, and excludes cases who developed symptoms before leaving Wuhan.
There is a total of m = 1,211 cases meeting these criteria in the collected data. These 1,211 observed durations between departure from Wuhan and symptom onset outside Hubei Province, {t_j, j = 1, . . . , m}, range from 0 to 22 days with a mean of 5.4 days and quartiles of 2, 5 and 8 days. It is worth noting that Bi et al. (2020) reported that 191 travelers developed symptoms 4.9 days on average after arriving in Shenzhen (Guangdong Province, China).
It is arguable that people who left Wuhan might have had a higher chance of being infected on the day of departure, since it is easier to be exposed to a human-to-human transmitted virus in a crowded environment. Hence in our dataset, there might be two types of individuals: (1) those who got infected during their stay in Wuhan and developed symptoms outside Hubei Province, and (2) those who got infected at the time of leaving Wuhan, for example, at the airport, railway station or on the way from Wuhan to their destinations. Thus, the observed durations between departure from Wuhan and symptoms onset are from a mixture of two distributions: the time between departure from Wuhan and symptoms onset (forward time)
and the complete incubation period. Note that the selected cohort is length-biased, since those with shorter incubation periods were less likely to be captured, as they had a higher chance of developing symptoms before leaving Wuhan. The length-biased issue cannot be tested easily in the data but arises naturally from the data collection process, since only those who developed symptoms after departure from Wuhan could be collected.
In this section, the distribution of incubation is estimated through the theory of renewal processes and interval censoring with a mixture distribution. Here we have to assume that the distribution of the incubation period is the same between the Wuhan residents who had scheduled to leave Wuhan and the general population. Furthermore, given an individual who got infected in Wuhan and developed symptoms outside Wuhan, it is reasonable to assume that the event of departing from Wuhan is independent of the events of infection and manifestation of symptoms. Hence, we can consider the incubation period as a continuous random variable, I, equal to the sum of forward and backward times, and the duration between departure from
Wuhan and onset of symptoms as the forward time V in the renewal process (see Figure 1 for an illustration). Suppose that I and V are continuous, let f_I(·) be the probability density function (pdf) of the incubation period, and let h(·) be the pdf of the forward time. According to Qin (2017) and Qin et al. (2020), we have

h(t) = S(t)/E(I),    (1)

where S(·) is the survival function and E(I) is the expectation of I.
Note that I is not observable in our dataset, but V is observable with observations {t_j}, j = 1, . . . , m. From Equation (1), we can see that the forward time V should have a monotonically decreasing density. However, the observed density of {t_j} does not seem to be monotone (see Figure 3). A possible explanation is that {t_j} are not observations of V only but a mixture of V and I. As aforementioned, due to the nature of a human-to-human infectious disease, it is easier to get infected at the airport/train station or on the flight/train/bus, namely, the infection occurs at the departure. In such a case, the duration between departure from Wuhan and onset of symptoms is no longer the forward time, but the complete incubation period. Taking this possibility into account, let π be the (unknown) probability of getting infected at the departure time from Wuhan, and 1 − π be the probability of getting infected before departure. Therefore, the duration between departure from Wuhan and symptoms onset follows a mixture distribution with density

f(t; θ, π) = π f_I(t; θ) + (1 − π) h(t; θ),    (2)

where θ is the model parameter in f_I(·) and h(·).
[ Figure 1 about here.]
Accounting for the error caused by daily reports, we can simply let t_j^+ = t_j + 0.5 and t_j^− = t_j − 0.5. The estimates of θ and π can be obtained by directly maximizing the likelihood function with interval censoring, that is,

L(θ, π; t_1, . . . , t_m) = ∏_{j=1}^{m} [ π {F_I(t_j^+; θ) − F_I(t_j^−; θ)} + (1 − π) {H(t_j^+; θ) − H(t_j^−; θ)} ],    (3)

where F_I and H are the cumulative distribution functions (cdf) of I and V respectively. We denote the maximum likelihood estimate (MLE) of (θ, π) by (θ̂, π̂) = arg sup_{θ,π} ℓ(θ, π), where ℓ(θ, π) = log L(θ, π; t_1, . . . , t_m). In Web Appendix B, we provide an alternative interpretation of the likelihood function.
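A minimal sketch of this fitting procedure under a Gamma incubation model, applied to simulated daily-reported durations rather than the real data, is given below; the forward times are generated length-biased so that their density is S(t)/E(I), as in Equation (1).

    ## Interval-censored mixture likelihood (1)-(3), maximized with optim().
    neg_loglik <- function(par, t) {
      a <- exp(par[1]); b <- exp(par[2])          # Gamma shape and rate
      p <- plogis(par[3])                         # mixture proportion pi
      Hval <- function(v)                         # H(v) = int_0^v S(x) dx / E(I)
        integrate(function(x) 1 - pgamma(x, a, b), 0, v)$value / (a / b)
      lo <- pmax(t - 0.5, 0); hi <- t + 0.5
      u  <- sort(unique(c(lo, hi)))
      H  <- setNames(vapply(u, Hval, numeric(1)), u)
      lik <- p * (pgamma(hi, a, b) - pgamma(lo, a, b)) +
             (1 - p) * (H[as.character(hi)] - H[as.character(lo)])
      -sum(log(pmax(lik, 1e-300)))
    }
    set.seed(1)
    m <- 1000; pi0 <- 0.2
    pool <- rgamma(5e4, 5, 0.8)                   # incubation model Gamma(5, 0.8)
    ilb  <- sample(pool, m, replace = TRUE, prob = pool)  # length-biased draws
    fwd  <- runif(m) * ilb                        # forward times, density S(t)/E(I)
    inc  <- rgamma(m, 5, 0.8)                     # full incubation (infected at departure)
    t_obs <- round(ifelse(runif(m) < pi0, inc, fwd))      # daily-reported durations
    fit <- optim(c(log(5), log(0.8), qlogis(0.1)), neg_loglik, t = t_obs)
    c(shape = exp(fit$par[1]), rate = exp(fit$par[2]), pi = plogis(fit$par[3]))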
In general, it is difficult to derive asymptotic properties of the estimator in interval censoring cases (see Lehmann and Romano, 2006; Gentleman and Geyer, 1994). However, the asymptotic properties can be proved under our particular setting, in which we have identical interval lengths for all observations, namely t_j^+ − t_j^− = 1 for j = 1, . . . , m. Let (t_j^−, t_j^+) for j = 1, . . . , m be independently and identically distributed (iid) observations from the mixture model (2). Define a pseudo-pdf for the mixed model (2) as

Q_p(t; θ, π) = π {F_I(t + 0.5; θ) − F_I(t − 0.5; θ)} + (1 − π) {H(t + 0.5; θ) − H(t − 0.5; θ)}.    (4)
Define two likelihood ratio functions
Let (θ_0, π_0) be the true parameter value. For notational simplicity, let g(t; ϕ) denote the density in (4) with ϕ = (θ', π)', that is, g(t; ϕ) = Q_p(t; θ, π). In addition, let q_θ denote the dimension of θ, ∇_ϕ = ∂/∂ϕ and ∇_ϕϕ = ∂²/(∂ϕ ∂ϕ'). The upcoming expectations are taken with respect to the true density g(t; ϕ_0), where ϕ_0 = (θ_0', π_0)'. To establish the asymptotic result, we make the following regularity condition.
The non-singularity of U in Condition 1(b) excludes the cases where at least one of θ and π is not identifiable. Theorem 1 below shows the asymptotic properties of the estimator (θ̂, π̂) if the true parameter value is an interior point in the parameter space, while Theorem 2 shows the case where π_0 is at the boundary.

Theorem 1: Suppose that g(t; ϕ) and ϕ_0 satisfy Condition 1, and that (θ_0, π_0) is an interior point in the parameter space. Then, as m → ∞, (θ̂, π̂) is consistent and asymptotically normal.
Theorem 2: Suppose that g(t; ϕ) and ϕ_0 satisfy Condition 1, and that θ_0 is an interior point in the parameter space of θ and π_0 = 1. As m → ∞,
The proofs of Theorems 1 and 2 are given in Web Appendix C. We can easily verify that the interval-censored mixture distribution (4) for the Gamma, Weibull (except when the shape parameter of the Gamma or Weibull is 1, that is, the exponential distribution) or log-normal distribution satisfies Condition 1, and thus the above two theorems hold for our estimates.
In this section, we study the estimation of generation time based on the serial interval and the incubation period under proper assumptions. The estimation of generation time applies only to the symptomatic population. Suppose an infector got infected at calendar time T_0 and showed symptoms at T_1. This infector infected an infectee at calendar time T_2, and the infectee showed symptoms at T_3. Let G = T_2 − T_0 denote the generation time, S = T_3 − T_1 denote the serial interval, and I_1 = T_1 − T_0 and I_2 = T_3 − T_2 be the incubation periods of the infector and infectee respectively. It is straightforward to see that G = S + I_1 − I_2.
If a disease is non-infectious during the incubation period (for example, SARS; Lipsitch et al., 2003), then we can naturally assume I_1 ⊥⊥ S and I_2 ⊥⊥ G. It then follows that

f_G(t) = f_S(t),

where f_G and f_S are the pdfs of G and S respectively, and the generation time can be estimated by the serial interval without inducing bias. However, such a case does not apply to COVID-19, as there were reported asymptomatic infections (Rothe et al., 2020). Instead, we assume I_1 ⊥⊥ G, I_2 ⊥⊥ G, I_1 ⊥⊥ I_2. The first part states that the incubation period of the primary case is independent of its generation time. This is true if the disease is infectious during the incubation period and, in addition, the ability to pass the pathogens to a susceptible host is independent of whether symptoms have developed. The rest is straightforward due to the standard assumption of independence between individuals. In addition, we assume the distributions of incubation period, generation time and serial interval are homogeneous among all individuals. Furthermore, to ensure that the observed serial intervals reflect the serial interval of the general population, we assume that the missingness (failure to establish contact tracing) is independent of the length of the serial interval. Hence, we obtain that
f_S = f_G ∗ f_I ∗ f_{−I},

where the symbol ∗ represents convolution, and f_G, f_S, f_I and f_{−I} are the pdfs of G, S, I and −I respectively. Thus, f_G is identifiable through the characteristic function (chf; or Fourier transformation),

φ_G(t) = E e^{itG} = φ_S(t) / {φ_I(t) φ_I(−t)},
where i = √−1, φ_I(t) can be approximated through the estimated distribution of I introduced in the previous section, and φ_S(t) can be estimated from the observed serial intervals, {s_1, . . . , s_n}, along with a proper kernel K(·), that is,

φ̂_S(t) = φ_K(h_n t) n^{−1} Σ_{j=1}^{n} exp(i t s_j),

where h_n is the bandwidth. Note that G must be positive, so to account for the boundary bias, Karunamuni (2009) proposed to use a boundary kernel K_c(t; y) at the point y > 0. Denote the Fourier transformation φ_{K_c}(t) = ∫_{−∞}^{∞} e^{itu} K_c(u) du. Hence, a consistent estimator for f_G is defined as

f̂_G(y) = (2π)^{−1} ∫_{−M_n}^{M_n} Re[ e^{−ity} φ̂_S(t) / {φ̂_I(t) φ̂_I(−t)} ] dt,
where M_n → ∞ and h_n → 0 as n → ∞, and Re is the operator taking the real part of a complex value. This estimator is consistent at any interior point in the support of G, provided that the model for the incubation period I is correctly specified (Liu and Taylor, 1989). It is equivalent to specifying a kernel density or a kernel chf, and possible choices are the Vallée Poussin (Fejér) kernels or Cesàro kernels (Devroye, 1989; Anastassiou, 2000). Note that the generation time must be positive. To correct the bias of the deconvolution at the boundary G = 0, a second-order correction to remove the boundary effect was proposed by Karunamuni (2000) and Karunamuni (2009). The density function f_G can also be obtained by imposing a parametric model on the generation time and fitting the implied density to the serial intervals, which relies heavily on correct model specification. More details about the conditions and properties of the deconvolution are given in Web Appendix D.
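The sketch below implements this deconvolution recipe on simulated serial intervals, with assumed Gamma incubation parameters standing in for the fitted model, and with the kernel chf and bandwidth h = 2 used in the data analysis of Section 6 (boundary correction omitted for brevity).

    ## Deconvolution: phi_G(t) = phi_S(t) / {phi_I(t) phi_I(-t)}, inverted
    ## numerically with the damping kernel chf phi_K(t) = (1 - t^2)^3_+.
    set.seed(1)
    a <- 5; b <- 0.8                                 # assumed Gamma incubation fit
    n <- 200
    g <- rgamma(n, shape = 8, rate = 2)              # latent generation times
    s <- g + rgamma(n, a, b) - rgamma(n, a, b)       # serial intervals S = G + I2 - I1
    phi_I <- function(t) (1 - 1i * t / b)^(-a)       # Gamma(a, b) chf
    phi_S_hat <- function(t) mean(exp(1i * t * s))   # empirical chf of S
    phi_K <- function(t) ifelse(abs(t) <= 1, (1 - t^2)^3, 0)
    h <- 2                                           # bandwidth, as in Section 6
    f_G_hat <- function(y) {                         # inverse Fourier transform
      integrand <- function(t) vapply(t, function(tt)
        Re(exp(-1i * tt * y) * phi_K(h * tt) * phi_S_hat(tt) /
           Re(phi_I(tt) * phi_I(-tt))), numeric(1))
      integrate(integrand, -1 / h, 1 / h)$value / (2 * pi)
    }
    fhat <- vapply(seq(0.5, 10, 0.5), f_G_hat, numeric(1))  # density of G on a grid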
In this numerical study, we assess the performance of our proposed method and the following methods in estimating the incubation period:

1. The renewal-process-based mixture model in Qin et al. (2020), denoted as Qin's method hereafter. The original method in Qin et al. (2020) is not directly applicable in our simulation because the mixture proportion π was prefixed; we therefore alter their method by estimating π simultaneously, so Qin's method here is actually an improved version of the method in Qin et al. (2020).
2. The interval censoring (IC) method, which fits the censored intervals directly.

We vary π over 0 and 0.2. Each setting is repeated 1,000 times. Table 1 summarizes the estimates of parameters in the incubation distribution using Qin's method, the IC method and our proposed method. We can see that when π = 0, our proposed method and Qin's method provide similar results. For π = 0.2, our approach has smaller bias in the Weibull setting. Because the log-likelihood is very flat near its maximum, the estimates may be slightly biased in finite samples; with larger sample sizes, the bias becomes smaller. The IC method does not perform well in our simulation as it takes neither the length-biased sampling issue nor the cross-infection probability π into consideration.
For generation time estimation, we assume the generation time and incubation period both follow Gamma distributions. The means and variances of these two periods are listed in Figure 2. We generate 200 serial intervals; note that some serial intervals may be negative. We choose the kernel chf φ_K(t) = (1 − t²)³₊, and apply the boundary correction according to Karunamuni (2009).
The results are displayed in Figure 2; note that the figure appears in color in the electronic version of this article, and any mention of color refers to that version. The cyan line is the fitted Gamma density using the observed positive serial interval data. The red line is the estimated generation time density by deconvolution. We can see that the estimated density of generation time by deconvolution is closer to the true density than the direct fit to the serial intervals, although the deconvolution estimate may be negative in some areas.
[ Figure 2 about here.]
In this section we analyze the real data of the COVID-19 outbreak originating from Wuhan, China. As described in Section 2, the times between departure from Wuhan and symptoms onset were collected for the 1,211 cases who got infected in Wuhan and developed symptoms outside Hubei Province; see Figure 3 for the histogram of the collected observations. Table 2 summarizes the estimates of model parameters as defined in Section 3 and quantiles of the incubation distribution with their 95% confidence intervals (CI) by nonparametric bootstrap. The last two columns list the log-likelihood (loglik) and goodness-of-fit χ² statistic (GoF) of each parametric distribution of the incubation period, where a higher loglik and a lower GoF indicate a better fit of the model. The number in the bracket of the GoF is the p-value of the goodness-of-fit test, and all three models fit well. More details about the goodness-of-fit test are in Web Appendix E. A likelihood ratio test about π can be conducted based on the mixture distribution of half 0 and half chi-squared distribution with 1 degree of freedom to infer the magnitude of π (Self and Liang, 1987; Susko, 2013). At significance level 0.05, the critical value is 2.71. Although the point estimate of π is zero, the log-likelihood is flat in the region π ∈ [0, 0.2], which results in a situation where a null hypothesis such as H_0: π > 0.1 or H_0: π < 0.1 cannot be rejected at significance level 0.05, since 2[max_θ ℓ(θ, 0) − max_θ ℓ(θ, 0.1)] < 2.71 (illustrated in Figure 3). Our model estimates that about 1% of patients have incubation periods longer than 21 days. This might influence the length of the quarantine period in regions with a severe epidemic.
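The 2.71 calibration can be checked numerically:

    ## Under H0 on the boundary, the LRT statistic is 0.5*delta_0 + 0.5*chi^2_1,
    ## so the 5% critical value c solves 0.5 + 0.5 * P(chi^2_1 <= c) = 0.95.
    qchisq(0.90, df = 1)                 # 2.705543, the 2.71 used in the text
    0.5 + 0.5 * pchisq(2.706, df = 1)    # approximately 0.95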
[Table 2 about here.]

[Figure 3 about here.]

Figure 3 plots twice the log-likelihood ratio, 2[max_{θ,π} ℓ(θ, π) − max_θ ℓ(θ, π)], versus π. The dashed line is at 2.71, the 90% quantile of the chi-squared distribution with 1 degree of freedom. In fact, the horizontal coordinate of the crossover point is the 95% upper bound of π by likelihood ratio, since 0.5 + 0.5 χ²(2.71, 1) = 0.95 (mixed chi-squared distribution), where χ²(·, 1) is the cdf of the chi-squared distribution with 1 degree of freedom.
From the last two columns in Table 2 we can see that the Gamma distribution slightly outperforms the other two distributions, having the smallest goodness-of-fit statistic. The corresponding incubation period has an estimated mean of 9.10 days and median of 8.50 days, and possesses a heavy tail. About 10% of infected individuals would develop symptoms after 14.57 days and 1% after 21.17 days. Although the confidence interval of π is relatively wide, the variation of the results on the quantiles of the incubation period is not significant, as shown in Table 2. Figure 3 visualizes the estimate on the histogram of the time between leaving Wuhan and symptoms onset.
For the estimation of the distribution of generation time, we choose the kernel chf φ_K(t) = (1 − t²)³₊ in (9) with bandwidth h = 2. The estimated probability density of generation time based on the estimated Gamma incubation period is displayed in Figure 4. We can see that the distribution of generation time has a much smaller variance than that of the serial interval.
[Figure 4 about here.]
The point estimate of the basic reproduction number is 2.96 with 95% confidence interval [2.15, 3.86]. Note that the estimate of R_0 is 2.18 when using serial interval data instead of the generation time, which severely underestimates the transmissibility of the disease.
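For completeness, the link from the estimated generation-time density to R_0 runs through the Euler-Lotka equation of Wallinga and Lipsitch (2007); the sketch below uses assumed Gamma parameters and an assumed growth rate r for illustration only (they are not the fitted values behind the 2.96 estimate).

    ## Euler-Lotka: 1/R0 = integral exp(-r t) g(t) dt for generation-time pdf g.
    r <- 0.10                              # assumed epidemic growth rate per day
    a <- 8; b <- 2                         # assumed Gamma generation-time model
    m <- integrate(function(t) exp(-r * t) * dgamma(t, a, b), 0, Inf)$value
    1 / m                                  # R0 implied by g and r
    (1 + r / b)^a                          # closed form for a Gamma g: identical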
In this paper, we proposed an estimator of the incubation distribution which only requires information on travel histories and dates of symptoms onset. Unlike the approach in Kuk and Ma (2005), our estimation of the incubation period is feasible regardless of whether the disease is infectious during the incubation period. It enhances the estimation by increasing the available sample size and utilizing censored information. We also took into consideration the mixture distribution of forward time and complete incubation period, as well as the interval censoring caused by daily reports; hence the result should be more robust than that in Qin et al. (2020).
According to the theory of renewal processes, the density of the forward time should be a decreasing function, as it is proportional to the survival function of the incubation period. If the density of the observed time between departure from Wuhan and symptoms onset is unimodal, it might be because (1) the observations come from a mixture of forward times and full incubation periods; or (2) the time is discretized. Hence, an estimation using the mixture distribution together with the censored intervals is recommended if the observed density is not monotonically decreasing. The mixture distribution is robust in incubation analysis in that the potential problem due to the existence of short-term tourists can be addressed by introducing π into the model. In addition, observing fewer zeros than ones is still reasonable even if there is no full incubation period mixed in the cohort (when π = 0), as the probability of being captured in our cohort is reduced by half if the 'scheduled' departure from Wuhan and symptoms onset occur on the same day, which is well reflected in the interval censoring situation since F_I(0^+; θ) − F_I(0^−; θ) is just equal to F_I(0^+; θ).
Compared with the estimated incubation periods in Li et al. (2020), Backer et al. (2020) and Linton et al. (2020), our estimation yields a longer incubation period. This is possibly because we avoided the selection bias by considering a longer follow-up period after departure from Wuhan and successfully recruiting the cases with long incubation periods. However, a limitation here is raised by the possible violation of the assumption that the individuals included in the study were either infected in Wuhan or on the way to their destinations.
In the previous studies of the basic reproduction number of COVID-19, and

Figure 1. Illustration of complete incubation period and forward time. Red circle: getting infected; blue column: departure from Wuhan; red cross: symptoms onset. The shaded area is the period during which our cohort sample departed from Wuhan. This figure shows 5 kinds of individuals; only those who departed from Wuhan in the shaded area were collected in our cohort. A: symptoms onset in Wuhan, not in our cohort; B and C: captured in our cohort with infection before departure; D: captured in our cohort with infection at departure; E: infection outside Wuhan, not in our cohort. This figure appears in color in the electronic version of this article, and any mention of color refers to that version.

Figure 3. COVID-19 data analysis result. Upper: twice the log-likelihood ratio, 2[max_{θ,π} ℓ(θ, π) − max_θ ℓ(θ, π)], versus π; the dashed line is at 2.71, the 90% quantile of the chi-squared distribution with 1 degree of freedom, and the horizontal coordinate of the crossover point is the 95% upper bound of π by likelihood ratio, since 0.5 + 0.5 χ²(2.71, 1) = 0.95 (mixed chi-squared distribution). Lower: incubation estimation. Red line: forward time fit; blue line: incubation period fit; black line: mixed observed time fit (covered by the red line). This figure appears in color in the electronic version of this article, and any mention of color refers to that version.

Table 1. Estimation of incubation distribution in simulation: estimates and standard errors. The first panel is our proposed method (mixture distribution with censoring), the second panel is Qin's method, and the third panel is the IC method.
(a) Gamma incubation: f_I(t; θ) = β^α t^{α−1} e^{−βt} / Γ(α); α = 5, β = 0.8.
Small RNA sequencing (small RNA-seq or sRNA-seq) is used to acquire thousands of short RNA sequences with lengths of usually less than 50 bp. With sRNA-seq, many novel non-coding RNAs (ncRNAs) have been discovered. For example, two featured series of rRNA-derived RNA fragments (rRFs) constitute a novel class of small RNAs [1]. Small RNA-seq has also been used for virus detection in plants [2-4] and
invertebrates [5]. In 2016, Wang et al. first used sRNA-seq data from the NCBI SRA database to prove that sRNA-seq can be used to detect and identify human viruses [6], but the detection results were not as good as those for plant or invertebrate viruses. To improve virus detection in mammals, our strategy was to detect and compare featured RNA fragments in plants, invertebrates and mammals using sRNA-seq data. In one previous study [7], we detected siRNA duplexes induced by plant viruses and analyzed these siRNA duplexes as an important class of featured RNA fragments. In this study, we detected siRNA duplexes induced by invertebrate and mammal viruses and unexpectedly discovered another important class of featured RNA fragments, complemented palindrome small RNAs (cpsRNAs). Among all the detected cpsRNAs, we found a typical 22-nt cpsRNA, UCUUUAACAAGCUUGUUAAAGA, from SARS coronavirus (SARS-CoV) strain MA15, which deserved further study because mice infected with SARS-
CoV MA15 died from an overwhelming viral infection with virally mediated destruction of pneumocytes and ciliated epithelial cells [8]. Although the palindromic motif TCTTTAACAAGCTTGTTAAAGA was already observed in a previous study [9], it had never been considered to be transcribed into cpsRNAs before our studies.
The first discovered cpsRNA, named SARS-CoV-cpsR-22, contained 22 nucleotides perfectly matching its reverse complementary sequence. In our previous study of mitochondrial genomes, we had reported for the first time a 20-nt palindrome small RNA (psRNA) named hsa-tiR-MDL1-20 [10]. The biological functions of hsa-tiR-MDL1-20 had been preliminarily studied in our previous study, while the biological functions of SARS-CoV-cpsR-22 were still unknown. In this study, we compared the features of siRNA duplexes induced by mammal viruses with those induced by plant and invertebrate viruses and found that siRNA duplexes induced by mammal viruses had significantly lower percentages of total sequenced reads, and it seemed that they were only produced from a few sites on the virus genomes. One possible reason could be that a large proportion of sRNA-seq data is from other small RNA fragments caused by the presence of a number of dsRNA-triggered nonspecific responses, such as type I interferon (IFN) synthesis [11]. Another possible reason could be that the missing siRNA duplexes or siRNA fragments function in cells by interacting with host RNAs or proteins. Based on this idea, we suspected that SARS-CoV-cpsR-22 could play a role in SARS-CoV infection or pathogenicity. We therefore performed RNAi experiments to test the cellular effects induced by SARS-CoV-cpsR-22 and its segments.
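That SARS-CoV-cpsR-22 is a perfect complemented palindrome, that is, the RNA sequence equals its own reverse complement, can be verified in a few lines of R:

    ## Check that the 22-nt cpsRNA equals its own reverse complement.
    revcomp_rna <- function(s) {
      comp <- c(A = "U", U = "A", G = "C", C = "G")
      paste(rev(comp[strsplit(s, "")[[1]]]), collapse = "")
    }
    s22 <- "UCUUUAACAAGCUUGUUAAAGA"
    identical(s22, revcomp_rna(s22))   # TRUE: all 22 nt match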
In this study, we report for the first time the existence of cpsRNAs. Further sequence analysis supported that SARS-CoV-cpsR-22 could originate from bat betacoronaviruses. The results of RNAi experiments showed that one 19-nt segment of SARS-CoV-cpsR-22 significantly induced cell apoptosis.
This study aims to provide useful information for a better understanding of psRNAs and cpsRNAs, which constitute a novel class of small RNAs. The discovery of psRNAs and cpsRNAs paves the way to finding new markers for pathogen detection and to revealing mechanisms of infection or pathogenicity from a different point of view.
In this study, 11 invertebrate viruses were detected using 51 runs of sRNA-seq data (Supplementary file 1) and two mammalian viruses (H1N1 and SARS-CoV) were detected using 12 runs of sRNA-seq data. In our previous study, six mammalian viruses were detected using 36 runs of sRNA-seq data [6]. The detection of siRNA duplexes for the 11 invertebrate and eight mammalian viruses was performed using a program published in our previous study [7]. Then, we compared the features of siRNA duplexes induced by invertebrate viruses (Figure 1A) with those induced by plant viruses (Figure 1B). The results showed that duplex length was the principal factor determining the read count in both plants and invertebrates. 21-nt siRNA duplexes were the most abundant duplexes in both plants and invertebrates, followed by 22-nt siRNA duplexes in plants but 20-nt siRNA duplexes in invertebrates. 21-nt siRNA duplexes with 2-nt overhangs were the most abundant 21-nt duplexes in plants, while 21-nt siRNA duplexes with 1-nt overhangs were the most abundant 21-nt duplexes in invertebrates, although their read count was very close to that of 21-nt siRNA duplexes with 2-nt overhangs. 18-nt, 19-nt, 20-nt and 22-nt siRNA duplexes in invertebrates had much higher percentages of total sequenced reads than those in plants. In addition, 18-nt and 19-nt siRNA duplexes had very close read counts, and 20-nt and 22-nt siRNA duplexes had very close read counts, in invertebrates. Since siRNA duplexes induced by mammalian viruses had significantly lower percentages of total sequenced reads, the comparison of siRNA-duplex features between mammals and invertebrates or plants could not provide meaningful results using the existing public data with standard sequencing depth. However, as an unexpected result of the siRNA-duplex analysis, we discovered cpsRNAs from invertebrate and mammalian viruses.
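To illustrate the kind of duplex calling involved (a minimal sketch under simplifying assumptions; this is not the published duplexfinder [7], and the read representation and function names are ours):

```python
# Minimal sketch of siRNA-duplex detection. Two reads on opposite strands form
# a putative duplex when each 3' end overhangs the other read's 5' end by a
# fixed number of nucleotides (2 nt for canonical Dicer products).
from collections import Counter

def find_duplexes(plus_reads, minus_reads, overhang=2):
    """plus_reads/minus_reads: lists of (start, end) genome coordinates,
    0-based half-open, for reads mapped to the + and - strands."""
    minus_counts = Counter(minus_reads)
    pairs = []
    for p_start, p_end in plus_reads:
        # For 3' overhangs of `overhang` nt on both strands, the matching
        # - strand read spans [p_start - overhang, p_end - overhang).
        key = (p_start - overhang, p_end - overhang)
        if minus_counts[key]:
            pairs.append(((p_start, p_end), key))
    return pairs

# Toy example: one 21-nt duplex with 2-nt overhangs.
print(find_duplexes([(100, 121)], [(98, 119)]))  # [((100, 121), (98, 119))]
```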
One typical cpsRNA, UCUUUAACAAGCUUGUUAAAGA (DQ497008: 25962-25983), located in the orf3b gene on the SARS-CoV strain MA15 genome, was detected in four runs of sRNA-seq data (SRA: SRR452404, SRR452406, SRR452408 and SRR452410). This cpsRNA was named SARS-CoV-cpsR-22; it contains 22 nucleotides perfectly matching its reverse complementary sequence (Figure 2A). We also detected one 18-nt and one 19-nt segment of SARS-CoV-cpsR-22, which could also be derived from siRNA duplexes (Figure 2B), but their strands (positive or negative) could not be determined. Among SARS-CoV-cpsR-22 and its two segments, the 19-nt segment was the most abundant.

Discovery of psRNAs and cpsRNAs

Palindromic motifs are found in the published genomes of most species and play important roles in biological processes. Well-known examples of palindromic DNA motifs include restriction enzyme sites, methylation sites and palindromic motifs in T cell receptors [12]. In this study, we found that palindromic or complemented palindromic small RNA motifs exist ubiquitously in animal virus genomes, but not all of them were detected to be transcribed and processed into cpsRNAs, probably due to inadequate sequencing depth of the sRNA-seq data. For example, we only found two psRNAs (
All sRNA-seq data were downloaded from the NCBI SRA database. The invertebrate and mammalian viruses were detected from sRNA-seq data using VirusDetect [4], and their genome sequences were downloaded from the NCBI GenBank database. The description of the sRNA-seq data and virus genomes is presented in Supplementary file 1. Cleaning and quality control of the sRNA-seq data were performed using the pipeline Fastq_clean [15], which is optimized to clean raw reads from Illumina platforms.
Using the software bowtie v0.12.7 with one mismatch allowed, we aligned all cleaned sRNA-seq reads to the viral genome sequences and obtained alignment results in SAM format for detection of siRNA duplexes using the program duplexfinder [7]. Statistical computation and plotting were performed using the software R v2.15.3 with Bioconductor packages [16]. The orf3b gene from human betacoronavirus (GenBank: DQ497008.1), its 20 homologous sequences from bat betacoronavirus and nine homologous sequences from civet betacoronavirus were aligned using ClustalW2 (Supplementary file 2). After removal of identical sequences, the orf3b gene from human betacoronavirus, eight homologous sequences from bat betacoronavirus and two homologous sequences from civet betacoronavirus were used for phylogenetic analysis. Since these homologous sequences had high identities (from 85.16% to 99.78%) to the orf3b gene from DQ497008, the Neighbor-Joining (NJ) method was used for phylogenetic analysis.
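For illustration, the pairwise identity computation underlying the quoted 85.16%-99.78% range can be sketched as follows (our simplified example, assuming sequences already aligned to equal length; the authors used ClustalW2):

```python
# Minimal sketch (assumption: sequences are already aligned to equal length,
# e.g. taken from a ClustalW2 alignment) of pairwise percent identity.
def percent_identity(a: str, b: str) -> float:
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

print(percent_identity("ATGGCA", "ATGACA"))  # 83.33...
```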
Based on the shRNA design protocol [1], the sequences of the segments (Figure 2B) and their control "CGTACGCGGAATACTTCGA" were selected for use as target sequences for pSIREN-RetroQ vector construction (Clontech, USA). PC-9 cells were divided into six groups, named 22, 16, 18, 19, 20 and control, for transfection using plasmids containing the segment and control sequences. Each group had three replicate samples for plasmid transfection and cell apoptosis measurement. Each sample was processed following the same procedure described below. At 12 h prior to transfection, the PC-9 cells were washed with PBS and trypsinized. Gibco RPMI-1640 medium was added to the cells, which were then centrifuged at 1000 rpm for 10 min at 4°C to remove the supernatant. Gibco RPMI-1640 medium (Thermo Fisher Scientific, USA) containing 10% fetal bovine serum was added to adjust the suspension to a volume of 2 mL containing 2 × 10^5 cells. These cells were seeded in one well of a 6-well plate for plasmid transfection.
Figure 1. siRNA duplexes induced by invertebrate and plant viruses. All cleaned sRNA-seq reads were aligned to the viral genome sequences using the software bowtie v0.12.7 with one mismatch allowed. The detection of siRNA duplexes was performed using the program duplexfinder [7]. A. The read count of siRNA duplexes varies with the duplex length and the overhang length, using data from 11 invertebrate viral genomes. B. The read count of siRNA duplexes varies with the duplex length and the overhang length, using data from seven plant viral genomes [7].

Figure 2. Clues to the origins of SARS-CoV-cpsR-22.
|
The Ebola outbreak that has devastated West Africa this past year may be receding, but it is far from over. Clinical trials of experimental antiviral agents, antibody preparations, and vaccines have begun, but even if these agents are effective, supplies will be limited and all of them will be costly (1). By themselves, they will not affect the course of the current outbreak or have much impact on its overall mortality. To improve patient survival, a different approach to treatment will be needed.
Reports of the care given to Ebola virus-infected health care workers who were evacuated to Germany and the United States have been invaluable. They document severe internal (third spacing) and external (vomiting, diarrhea) fluid losses and electrolyte disturbances (2-4). These findings reflect the profound endothelial dysfunction and vascular barrier breakdown that are the central features of human Ebola virus disease. Left untreated, these changes usually lead to profound hypovolemia, multiorgan failure, and death (5). Fortunately, these health care workers received meticulous care and all survived.
Animal models of Ebola virus infection, including those in nonhuman primates (6), have not duplicated the fluid and electrolyte disturbances seen in human Ebola virus disease. Noninfectious Ebola virus glycoproteins (GPs) are shed from infected cells (7) and activate myeloid and endothelial cells via a TLR4-mediated mechanism. This leads to endothelial dysfunction and increased vascular permeability. A recent study in Collaborative Cross mice has demonstrated the importance of endothelial dysfunction and increased vascular permeability in causing lethal Ebola virus infection (8).
Sepsis is another condition that, as in Ebola virus disease, is characterized by endothelial dysfunction, multiorgan failure, and high mortality (5). Several lines of experimental evidence suggest that maintaining or restoring endothelial barrier integrity can improve survival (9). For example, one study was conducted with transgenic mice engineered to overexpress IκBα in endothelial cells alone (10). Overexpression of IκBα blocks the activation of NF-κB, which, when allowed to activate, translocates to the nucleus and leads to the release of proinflammatory cytokines and chemokines. When these mice were subjected to Escherichia coli sepsis, selective blockade of endothelial NF-κB activation via overexpression of IκBα had no effect on the appearance of systemic cytokines and chemokines, but it prevented the development of endothelial dysfunction and multiorgan failure and improved survival (10). This and other studies suggest that treatments targeting the endothelial response to sepsis might improve survival. The same might be true for Ebola virus disease.
In vitro studies have shown that statins (11, 12) and angiotensin receptor blockers (ARBs) (13) preserve or restore endothelial barrier integrity. In older adults hospitalized with community-acquired pneumonia (a disease also characterized by endothelial dysfunction), an observational study suggested that inpatient treatment with statins and ARBs reduced 30-day all-cause mortality by 32% and 53%, respectively (14). (For most of these patients, outpatient treatment was continued after hospital admission.) However, in patients with sepsis-related acute respiratory distress syndrome requiring intensive-care unit (ICU) admission and mechanical ventilation, randomized controlled trials of statin treatment have shown no improvement in survival (15). In these patients, statin treatment was probably "too little, too late." To be effective, statins probably have to be started earlier, as suggested by the findings of a randomized controlled trial of 100 patients hospitalized with early sepsis (16). At the time of enrollment, none of the patients had evidence of organ failure and all had been statin naive for at least 2 weeks. As soon as they were hospitalized, they were treated with either atorvastatin (40 mg/day) or a placebo. The trial showed that atorvastatin reduced the occurrence of multiorgan failure by 83%, a result that was likely due to stabilization of endothelial function.
Cardiologists have known for more than a decade that when statins and ARBs are given in combination to patients with cardiovascular disease, they have additive or synergistic activities in counteracting endothelial dysfunction (17, 18) . Both drugs can be administered orally once a day, and they have been shown to be safe when given to thousands of patients with acute critical illness. A full discussion of the mechanisms by which statins and ARBs preserve or restore endothelial barrier integrity is beyond the scope of this article. Nonetheless, the studies cited above suggest that the increased vascular permeability and the fluid and electrolyte abnormalities seen in Ebola patients might improve after treatment with these agents (19) . Because they have direct effects on the response of endothelial cells to Ebola virus infection, they might improve survival.
Most Ebola scientists seek to develop new treatments for Ebola virus disease that directly target the virus (1). They first test potential treatments in animal models and then evaluate promising agents in clinical trials. They are hesitant about treatments that target the host response because these treatments are based on extrapolations from findings obtained from patients with other conditions (20) . They are uncertain about the safety of using agents that they believe might increase virus replication. They worry about disillusionment if treatment is found to be ineffective.
The results of the first clinical study of an antiviral treatment for Ebola virus disease were recently reported. A proof-of-concept phase 2 study of favipiravir was conducted in two Ebola treatment units in Guinea (21). Favipiravir is a nucleoside polymerase inhibitor that in vitro and animal studies have shown to have antiviral activity against Ebola viruses. In 69 PCR-positive adults and adolescents ≥14 years of age, a loading dose of favipiravir (6,000 mg orally) was given in divided doses on day 1, and then 1,200 mg was given twice a day for the next 9 days. Ebola virus loads were determined by cycle threshold (CT) values, with CT values of <20 and ≥20 indicating high and low virus loads, respectively. Mortality rates after 14 days in consecutively treated patients were compared with those in untreated patients seen in the two units during the previous 3 months. Among favipiravir-treated patients with low virus loads, mortality was 15%, compared with 30% in historical controls. However, among treated patients and historical controls with high virus loads (approximately 40% of all treated patients), mortality rates in both groups were 85%. Although this study is not definitive, it suggests that antiviral treatment of Ebola patients might yield only modest improvements (<20%) in overall survival.
There are no data on the efficacy of combination treatment with statins and ARBs in animal models of Ebola virus disease, including nonhuman primates. Although statin effects on Ebola virus replication are unknown, statins have been shown to reduce the replication of at least five other RNA viruses (D. S. Fedson, unpublished observations); there are no data on the effect of ARBs on the replication of any virus. Moreover, these agents are produced as inexpensive generics in developing countries and are available in West Africa (19) . If given in combination, a 10-day course of treatment for an individual Ebola patient would cost only a few dollars. There is no guarantee that such treatment would convincingly reduce Ebola mortality or forestall complications in those given preventive treatment, and failure to show efficacy would be disappointing. However, given their promise and known safety, it is reasonable to think that they might improve survival in Ebola patients.
It must be emphasized that treating the host response would not prevent or cure Ebola virus infection itself, but it might allow individual patients to survive long enough to develop an immune response that eliminates the virus. These agents could be used in combination with antivirals if they are available.
In September 2014, one of the authors of this report (O.M.R.) facilitated the procurement of a supply of atorvastatin and irbesartan and arranged to have the drugs delivered to officials in Sierra Leone. In an arrangement negotiated with several governmental ministers and staff of the Office of National Security, it was agreed that this agency would conduct initial trials in police and military hospitals in Freetown, Sierra Leone. The drugs were not to be used without the approval of an agency such as the Pharmacy Board of Sierra Leone. The agreement did not stipulate that signed informed consent be obtained from each patient because it was assumed that physicians and the government would be acting in the best interests of their patients. Instructions that accompanied the donation stipulated that records of treatment should be kept and reviewed on a continuous basis to determine whether treatment was safe or might increase mortality rates. They also stipulated that all results were to be made public, regardless of outcome. In addition, two of the coauthors (D.S.F. and S.M.O.) wrote detailed letters to the Pharmacy Board in November outlining the rationale for treatment and providing guidance on the use of the drugs.
The circumstances for testing these drugs were not ideal; for example, there was no financial or logistical support for proper clinical trials. Nonetheless, local physicians were able to treat approximately 100 consecutive patients with laboratory-confirmed Ebola virus disease at the 34 Military Hospital in Freetown, the Port Loko Government Hospital, the Hastings Ebola Treatment Centre, and other sites in Sierra Leone. Patients were given atorvastatin (40 mg/day) and irbesartan (150 mg/day). Reports indicate that rapid clinical improvement was seen in almost all patients, and only two who were inadequately treated are known to have died (O.M.R., unpublished observations). One was critically ill when first seen and died soon thereafter. The other initially responded to 3 days of combination treatment, but when treatment was stopped and he was given an antiviral agent, he relapsed and died.
Unfortunately, supervising physicians and health officials in Sierra Leone have not released reports of the treatment results, although they exchanged letters and memoranda describing their experience, with one letter noting "remarkable improvement" on treatment (O.M.R., unpublished observation). It will be up to others to rigorously review and validate these findings. A decrease in the Ebola case fatality rate has been observed at the Hastings Ebola Treatment Centre (22); perhaps atorvastatin and irbesartan treatment of some of these patients helps explain that decrease.
Given the highly encouraging but poorly documented results of atorvastatin and irbesartan treatment in Sierra Leone, it is important to decide what should be done next. At least four things should be considered.
Undertake research on the host response to Ebola virus infection and its treatment. Current international programs for improving the care of Ebola patients are focused on the development and testing of experimental treatments that target the Ebola virus (1). These programs are generously supported by government, foundation, and corporate grants and contracts. In the United States alone, for the years 2003 to 2013, NIAID-sponsored research and development for medical countermeasures targeting Ebola virus disease totaled $333 million (23) . For 2015, Congress appropriated an additional $238 million for NIAID-sponsored research and development for Ebola countermeasures. An additional $870 million was appropriated for other agencies (FDA, DOD, BARDA), for a total of $1.1 billion. In contrast, studies of treatments that target the host response to Ebola virus disease have received no such support.
This imbalance in Ebola research and development should change. Funding agencies should redirect some of their resources to studies of the host response to Ebola virus disease. This work should involve scientists outside the Ebola community, especially those who understand endothelial cell biology and how it is affected in other diseases, such as sepsis, pneumonia, and influenza. They should determine whether inexpensive generic drugs that modify endothelial cell function might be used to treat Ebola patients.
Perform clinical studies in West Africa. Health care workers in West Africa should undertake pragmatic clinical trials to test combination treatment with statins and ARBs in Ebola patients and their contacts. The candidates who should be considered for treatment with statins and angiotensin receptor blockers are Ebola patients cared for in hospitals and other treatment units that are staffed by trained health care workers, Ebola patients treated at home, health care workers (to prevent severe illness in those who are at high risk and might become infected while caring for Ebola patients), family caregivers and other close contacts, and community surveillance and burial workers. The primary goal of treatment is to reduce patient mortality. Secondary goals are to reduce requirements for fluid and electrolyte replacement and prevent severe disease in health care workers and other contacts who become infected. Treating all patients consecutively would be the most straightforward approach, and results could be compared with those seen in historical controls. Clinical oncologists often use the same approach when they evaluate new treatments, and it can be highly efficient. For example, if the goal is to achieve a statistically significant (95% confidence interval) reduction in case fatality rates from 50% to 25%, only 52 patients would have to be treated. One problem with this method is uncertainty about mortality rates in historical controls in settings where better supportive care has already improved survival. This concern has already led to uncertainty about the results of the favipiravir study in Guinea (21) . If clinical record keeping in the centers in Sierra Leone where Ebola patients were treated had been adequate, a matched-set case-control study might be used to retrospectively evaluate the effectiveness of atorvastatin and irbesartan treatment.
A randomized controlled trial might also be undertaken, although with 1:1 randomization, a statistically significant reduction in mortality from 50% to 25% would require 210 patients, and only half would receive active treatment. Given the known high mortality rate of untreated patients, investigators might instead choose an adaptive trial design. This would minimize the number of untreated placebo subjects and allow different treatment regimens to be tested simultaneously for efficacy and safety.
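To make these sample-size figures concrete, here is a minimal Python sketch (our illustration, not the authors' calculation) using a standard two-proportion formula; the exact numbers depend on the chosen alpha, power and design, so they need not reproduce the 52 and 210 quoted above.

```python
# Minimal sketch of a two-proportion sample-size formula for a 1:1 randomized
# trial (normal approximation; alpha and power are assumptions, not taken
# from the article).
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    num = (za * sqrt(2 * pbar * (1 - pbar))
           + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Mortality reduction from 50% to 25%:
n = n_per_arm(0.50, 0.25)
print(n, "per arm,", 2 * n, "total")  # 58 per arm, 116 total at 80% power
```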
Consider the implications of successful treatment of the host response for clinical trials of other interventions that target the Ebola virus. If convincing evidence that statin and ARB treatment reduces Ebola mortality is forthcoming, this might have major implications for clinical trials of all interventions that target the Ebola virus. Treating the host response with these agents would become a new standard of care in clinical trials; both control and intervention subjects would have to be given statins and ARBs. This would necessarily increase sample size requirements and might make it difficult to conduct a successful trial.
Recognize the implications of treating the host response for other diseases. If combination treatment with statins and ARBs is convincingly shown to reduce Ebola mortality, it will suggest that these agents might be used in the syndromic treatment of other forms of acute severe illness, much like oral rehydration solution is used to treat the host response to severe diarrheal illness, regardless of cause (24) . The combination of statins and ARBs might eventually be used to treat other filovirus infections. These agents might find a place in treating dengue hemorrhagic fever, hantavirus infections, and severe acute respiratory syndrome/Middle East respiratory syndrome coronavirus infections. Observational studies have already suggested that statins and ARBs may reduce mortality in patients with community-acquired pneumonia (14) and that statins may reduce mortality in patients with influenza (24) . Combination treatment might be especially useful for a global response to an avian influenza pandemic. It might also find use in treating nonviral diseases, such as pneumococcal sepsis and severe malaria, and it might provide an effective medical countermeasure against potential agents of bioterrorism, such as those for smallpox, plague, and anthrax (24) . All of these possibilities deserve study.
The Ebola crisis in West Africa has highlighted the contributions that new technologies are making to the identification of emerging viruses, the rapid diagnosis of disease, and the development of new vaccines and antiviral agents (25) . These technologies are being used to develop several treatments that target the Ebola virus, but even if they are effective, they will be expensive and in short supply. As an alternative, inexpensive generic agents that counteract endothelial dysfunction could be used to treat Ebola patients. Reports from Sierra Leone suggest that combination treatment with atorvastatin and irbesartan reduced mortality. These agents are safe and inexpensive, and if the results of treatment can be validated, their use would transform the way Ebola virus disease is managed. These agents might also find use in the syndromic treatment of other severe infectious diseases.
|
Age (>70 years; CFR 10.2%) and coexisting conditions, particularly cardiovascular disease (CFR 10.5%) and hypertension (CFR 6.0%), are independent predictors of adverse outcome among 45,000 COVID-19 patients in China [1]. A consensus has emerged that SARS-CoV-2 uses the same 'receptor' as SARS-CoV, the angiotensin-converting enzyme 2 (ACE2), for initial binding to the host cell.
This must be co-expressed with the serine protease TMPRSS2, which primes the spike protein S1 for endocytosis-mediated internalization of the virus, employing the S2 domain for fusion to the host membrane (Figure 1A) [2-4]. A key difference in SARS-CoV-2 is a second spike protein site (S2'), proposed to be cleaved by the proteinase furin [2]. Once inside the cell, the cysteine proteases cathepsins L and B are thought to be critical for endosomal processing in certain cells [3, 4].
In the cardiovascular system, ACE2 protein [5] and mRNA [6] are present in cardiomyocytes. ACE2 normally functions as a carboxypeptidase, cleaving single C-terminal amino acids, thus hydrolysing the Pro-Phe bond in Ang-II to give Ang-(1-7), [Pyr1]-apelin-13 to [Pyr1]-apelin-(1-12), and des-Arg9-bradykinin to inactive bradykinin-(1-8) [4]. Internalization of ACE2 by the virus potentially reduces the beneficial counter-regulatory function of these peptide products in the RAAS pathway [2, 4]. Conversely, the serine protease ADAM17 cleaves ACE2 to release its ectodomain, and this is stimulated by Ang-II and, potentially, apelin acting via their respective G-protein-coupled receptors [7].
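As a toy illustration of this single-residue carboxypeptidase action (our sketch, not from the letter; the angiotensin II one-letter sequence DRVYIHPF and its product Ang-(1-7) = DRVYIHP are standard):

```python
# Minimal sketch of ACE2's carboxypeptidase action: removal of a single
# C-terminal residue. Angiotensin II (DRVYIHPF) loses its C-terminal Phe
# to give Ang-(1-7) (DRVYIHP).
def carboxypeptidase_cleave(peptide: str) -> tuple[str, str]:
    """Return (product, released C-terminal residue)."""
    return peptide[:-1], peptide[-1]

product, released = carboxypeptidase_cleave("DRVYIHPF")
print(product, released)  # DRVYIHP F  -> Ang-(1-7) plus free Phe
```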
Shed ACE2 binds SARS-CoV-2, a complex predicted not to internalize, and therefore circulating ACE2 could be exploited as a beneficial viral decoy substrate.
Intriguingly, ACE2 is highly expressed in the GI tract, where it is associated with B0AT1 (SLC6A19), which actively transports most neutral amino acids across the apical membrane of epithelial cells [4, 8]. It is not yet known whether B0AT1 and ACE2 are co-expressed in cardiomyocytes and represent an important mechanism of viral entry, but they can form a heterodimer, with the ACE2 capable of binding the spike protein S1 [8]. Interleukin 6 (IL-6), normally transiently produced, is elevated in serum and positively correlated with disease severity in COVID-19 patients [9]. We hypothesised that differential expression of genes encoding proteins in these pathways occurs in aged cardiomyocytes. See the recent review [4] for a comprehensive list of references supporting the concepts outlined in this letter.
[A] Schematic diagram of the key proteins predicted from RNA-seq data to be expressed by human cardiomyocytes. We propose that SARS-CoV-2 binds initially to ACE2 (with the ACE2/B0AT1 complex as a potential second entry site). TMPRSS2 priming of the spike protein S1, together with further protease activation by cathepsins B and L, facilitates viral cell entry and internalization by endocytosis. Furin may also have a role in this process. Internalization of the virus with ACE2 inhibits ACE2 carboxypeptidase activity, which normally hydrolyses Ang-II, apelin and des-Arg9-bradykinin. ADAM17, present on the cell surface, cleaves ACE2 to a soluble form that circulates in the plasma and could act as a decoy substrate for the virus. Levels of ADAM17 may be regulated by Ang-II and apelin acting via their respective G protein-coupled receptors [4]. ↑ indicates genes with increased expression in aged versus young cardiomyocytes.
[B] Log2 fold change for expressed genes encoding SARS-CoV-2 entry proteins, peptide receptors in the ACE/ACE2 pathway, and IL-6 and interferon receptors in aged cardiomyocytes. A chi-squared analysis of the selected gene panel shown in [B] showed significant enrichment for differentially expressed genes compared with the RNA-seq data set as a whole (P < .05, 95% confidence level).
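To show the shape of the enrichment test described above (a minimal sketch with hypothetical counts, not the letter's data):

```python
# Minimal sketch of a chi-squared enrichment test: compare the proportion of
# differentially expressed (DE) genes in the selected panel against the rest
# of the RNA-seq data set. All counts below are hypothetical placeholders.
from scipy.stats import chi2_contingency

panel = [12, 8]           # [DE, non-DE] in the selected panel (hypothetical)
background = [900, 8100]  # [DE, non-DE] in the rest of the data (hypothetical)
chi2, p, dof, expected = chi2_contingency([panel, background])
print(f"chi2={chi2:.2f}, p={p:.3g}")  # enrichment if p < 0.05 and panel DE fraction is higher
```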
|
Waste management is a major challenge for cities in developing countries. It has been calculated that approximately 1.3 billion tons of solid waste were generated in cities around the world in 2012, a figure predicted to increase to 2.2 billion tons by 2025 [1]. There is large-scale participation of workers in waste collection in low- and middle-income countries [2]. Waste management consists of a wide range of activities, including collecting garbage, collecting and sorting recyclable materials, and collecting and processing commercial and industrial waste [3]. The physical activities of waste collectors are heavy and repetitive, such as lifting, carrying, pulling, and pushing [4]. Waste collectors therefore cope with occupational risks. The intensity and type of risk depend on where they work (recycling centers, warehouses, on the streets or in garbage dumps), on their working conditions (informal or organized groups), on the nature of the waste (composition, components and decomposition), and on the duration of their exposure [5, 6]. A study in Latin America in 2018 pointed out that the most commonly reported diseases among waste collectors were osteomuscular disorders (78.7%), arboviruses (28.6%), episodic diarrhea (24.9%), hypertension (24.2%), bronchitis (14.3%), intestinal worms (12.6%) and diabetes (10.1%) [7]. Lucia Botti et al. also found that waste collectors are affected by poor ergonomic conditions and a high risk of musculoskeletal disorders [8]. These are the major reasons that bring waste collectors to health services.
In Vietnam, there have been several studies on the status of musculoskeletal disorders, occupational accidents and related factors among waste collectors [9, 10], but there have been no studies of factors affecting their access to health services. To our knowledge, several studies in India and in the Amhara region of Northwest Ethiopia [11, 12] have identified factors associated with musculoskeletal disorders and occupational injuries among waste collectors, but many factors affecting their access to health services, such as geographical accessibility, the availability of health facilities, acceptance of the quality of health services, health insurance, and affordability, remain unexplored. In this study, we aimed to understand factors affecting access to health services among waste collectors in Hanoi, Vietnam, which are not well reported in previous studies.
This was a cross-sectional study in which qualitative research was used to understand factors affecting access to health services among waste collectors in Hanoi, Vietnam. Qualitative research is gaining recognition and is increasingly used in health care research with social and cultural dimensions [13]. It is an umbrella term covering an array of interpretative techniques which seek to describe, decode, translate and otherwise come to terms with the meaning, not the frequency, of certain more or less naturally occurring phenomena in the social world [14, 15].
The subjects of the qualitative research were waste collectors who had been living and working as waste collectors for more than 2 months in Bac Tu Liem district, Hanoi, Vietnam. They were in a good state of health and agreed to provide information for this study. Details regarding the study participants are reported in Section 3.1. Patton (2002) stated that there are no rules for sample size in qualitative inquiry; in other words, sample size depends on the aim of the study and what is possible given the time and resources available [16]. In our study, a total of 30 waste collectors took part in in-depth interviews and 19 waste collectors attended focus group discussions; three focus group discussions, each with 6 to 7 waste collectors, were conducted. Snowball sampling was used as the recruitment technique, in which waste collectors in our study were asked to assist the researchers in identifying other potential subjects.
The study instruments consisted of in-depth interview and focus group discussion guides designed to understand factors affecting access to health services among waste collectors in Hanoi, Vietnam. We developed the study instruments based on the Tanahashi framework for effective coverage with health services [17]. For the in-depth interview and focus group discussion guides, these were the main questions: How did you assess geographical accessibility to health facilities? Which health facilities were available at your residence and outside your residence? Among those you have listed, how did you assess infrastructure, human resources, medical equipment, medicine, and administrative procedures? How did you assess the quality of health services at the health facilities? Did you have health insurance? How did you use the health insurance? What was your ability to pay for medical services? How much could you afford to pay? What are your recommendations for improving access to health services for waste collectors? These questions were supplemented with relevant sub-questions depending on the answers received.
Waste collectors were invited to participate in the in-depth interviews and focus group discussions. They also introduced other potential subjects to the research team. All participants received a clear explanation of the study objectives before participating. Their participation in the study was entirely voluntary: they could refuse to provide information at any moment during the interview without any penalty. Audio recording was done only with the consent of the interview subjects; if a participant did not agree to an audio recording, the researchers took detailed notes and captured illustrative quotes. During the survey, interview recordings, field notes, and visual materials were collected. The interviewees' personal identities were coded and quoted anonymously. We spent at least one hour on each in-depth interview and one and a half hours on each focus group discussion.
Regarding the quality control of the study, the in-depth interview and focus group discussion guides were developed, piloted on-site by the research team, and revised and finalized before the official data collection. Data collection in the field was conducted by 2 research team members who work for the Hanoi University of Public Health, have a background in public health or social work, and have at least 5 years of experience in qualitative research. Two research team members took part in each focus group discussion: one guided the discussion, and the other took notes and kept the meeting minutes.
All interviews and focus group discussions were audio-recorded, transcribed and processed with the software NVivo 7.0 according to the study themes. We used thematic analysis to achieve the study's objective. Data were analyzed by 2 research team members, and the analysis was carried out in duplicate.
The study was approved by the Institutional Review Board of Hanoi University of Public Health (HUPH), Decision No. 017-315/DD-YTCC. All individual participants included in the study gave informed consent.
The study results show that the majority of participants were Kinh. They came from provinces/cities neighboring Hanoi such as Hung Yen, Nam Dinh, Vinh Phuc and Thai Binh (over 30 persons). Their ages varied from 30 to 65 years, and they were mainly female (9 males and 40 females provided information). There was no difference between males and females, or between the immigrants and the local population, in terms of education level. This target group had a relatively low level of education: most of them had completed secondary school. Most did not follow any religion; only a few followed Christianity.
In addition, all participants in the study were married, and all local people doing this work lived in their own homes with other family members such as their spouses and children. The immigrant residents all lived in a boarding house or rented accommodation, most of them with their spouse or co-workers. Anthropometric measures also varied between the two genders: female waste collectors were from 1.50 m to 1.65 m tall and weighed from 40 kg to 55 kg, while male waste collectors were about 1.60 m to 1.70 m tall with an average weight of about 60 kg. According to the study participants who were local, they had an advantage over the migrants in geographic access to health services. When they had a common sickness such as flu, a runny nose, headache or leg pain, they could go immediately to the commune-level health station, a traditional medicine clinic or a pharmacy in their residence. One of the participants said: "I have not gone to any hospitals but only to the health station for checking my leg injury due to an accident. It is called Yen Be Health Station, about 100 metres from my home. There is also a pharmacy nearby". However, the local waste collectors rarely had regular check-ups. They went to central-level hospitals such as Bach Mai Hospital and St. Paul Hospital only when they had severe illnesses, instead of going to the Health Center of Bac Tu Liem at district level. Moreover, most of the central-level hospitals were located relatively far from their residential area.
According to the local waste collectors, their means of transportation to healthcare providers such as commune-level health stations and pharmacies in their residence were walking or cycling, because of the short distance and travel time. A participant shared: "I often ride a bike to the pharmacy or the health station, usually taking up to 5 minutes". When getting treatment at central-level hospitals, they rode a motorbike; for example, a participant noted, "I usually travel by motorbike in about an hour to get to the hospital at central level".
As regards the immigrants, their access to commune-level health stations or district-level health centers was much more limited. Most of them did not have access to these health facilities since they were not local residents. Therefore, when they were not severely ill, they preferred to go to private health clinics and pharmacies in Bac Tu Liem district by personal means of transportation such as bicycle, motorbike or on foot, instead of seeking examination and treatment at public health facilities at commune or district level. One of the participants stated: "I often go to the pharmacy nearby my home for common illnesses such as flu. I will go to 198 Hospital or St. Paul Hospital or Bach Mai Hospital at central level when having more serious illnesses".
Similar to the locals, the immigrants would also seek support from bigger central-level hospitals in case their health conditions got worse and they could not continue working. They went back to their hometown to prepare for diagnosis and treatment (treatment fees, belongings) and used public transportation to travel to the central-level hospital to reduce travel costs. One of the participants said: "I will travel from my hometown to the Bach Mai Hospital by bus. They have from 7 to 8 trips every day; it costs only 7,000 VND (equivalent to 0.3 USD) each trip. Otherwise it is only 1.5 km if I go to the commune health station in my hometown".
As regards the public health facilities in Bac Tu Liem district, there were 2 district-level health clinics and 13 commune-level health stations. However, none of the study participants used these clinics. Therefore, our study was unable to assess the specific infrastructure, equipment, medication, capacity of health workers, and administrative procedures of these clinics based on the waste collectors' experience.
According to the local participants, the infrastructure of the commune-level health stations was gradually improving. An interviewee shared: "The Health Station of Yen Be ward is now a renovated 2-storey building. The infrastructure is generally better and more convenient". However, the immigrants still considered the facilities of the pharmacies and private clinics to be better. As noted by one of the participants: "I still think the private pharmacies are more convenient, and better equipped. There are plenty of them around here".
Besides, there was not much difference between the assessments of the locals and the immigrants regarding the infrastructure of health facilities outside Bac Tu Liem district, including the central-level hospitals. In their evaluation, all of the central-level hospitals had excellent, modern infrastructure and were credible places for diagnosis and treatment. A participant shared: "I took my wife to Bach Mai Hospital once. We had very good impression of the hospital. The hospital has a nice infrastructure and a clean environment". Meanwhile, neither the locals nor the immigrants rated highly the infrastructure and medicines of the health service providers in Bac Tu Liem district, such as health stations, private clinics and pharmacies. The main reason was the lack of medical facilities, or of synchronized investment in medical facilities; the quality of medicines was limited and did not fulfill the study participants' demands. One participant said: "The health stations at commune level are certainly nearer to our home, but the medical staffs are not qualified enough. Most of them also recommend us going to the hospitals of higher level, since they do not have enough facilities there". The differences among health providers in Vietnam have been described elsewhere [18, 19].
However, when their health problems affected their labor productivity, both immigrants and locals would still go to the central-level hospitals instead of Bac Tu Liem district's health service providers. According to them, the equipment, medicine and especially the qualifications of the doctors in central-level hospitals were better than those in commune-level health stations and district-level health centers. One of the participants mentioned: "The Central-level hospitals are perfect in my opinion. All of the doctors have doctoral degree with high professional skills, which the district-level or communal-level health providers can not match". In addition, according to the study participants' assessment, the ability of the health providers in their current residence in Bac Tu Liem district, such as commune-level health stations, pharmacies and private clinics, to solve their health problems was just average. Besides the limitations mentioned, the study also found a number of advantages: the current administrative procedures were relatively fast, with no frills or long waiting times. This was one of the positive influences on waste collectors' access to health services in the health facilities of their current residence in Bac Tu Liem district. One of the participants stated: "There is not much administrative procedure in the health stations at commune level. Most of the doctors are old. They are very friendly and willing to answer our questions. They also handle tasks very quickly".
The majority of the participants in our study suffered from musculoskeletal disorders. Some of them had other conditions, such as tracheal problems, dermatitis, and gastrointestinal tract diseases. One participant shared: "I often have back pain, knee pain and shoulder pain". Moreover, the job caused waste collectors tension, fatigue, and depression, and many of them had insomnia due to their hard work. A participant said: "I have had insomnia for 5 years due to my heavy workload and stress about my husband's sickness". Therefore, waste collectors had a need for medical care services.
At the health service providers, the staff's attitudes and behavior were central to patient satisfaction. According to the waste collectors, they did not face any difficulty or discrimination in communicating with the medical staff in the health service facilities inside or outside Bac Tu Liem district, whether they were local or immigrants. One of the participants said: "My treatment at the Vietnam Cuba Hospital was excellent, as they specialize in Otolaryngology. I was very satisfied with the staff's friendly attitude and good service".
Among the health service providers in Bac Tu Liem district, the local participants were quite satisfied with the health services provided by commune-level health stations, particularly preventive health care, epidemic prevention, and vaccination for children and pregnant women. One of the participants mentioned: "I used to go to Minh Khai Health Station for a gynecological examination. The staff was very friendly and attentive; the examination quality was also very good". However, the pharmacies and private clinics were the frequent destinations for the immigrants who work as waste collectors. A participant shared: "The on-request examinations at the private clinics are more cost-effective, less time-consuming with not too many procedures, and we can receive early diagnostic results. Because there are not too many patients at the clinics, and we can have any type of examinations we want". At the large hospitals outside Bac Tu Liem district, both immigrant and local waste collectors were satisfied with the service quality, especially the medical staff's attitudes. One of the participants shared: "The doctors at 108 Central Hospital were very attentive and gentle, they instructed me every step of the examination".
Nearly half of the study participants did not have health insurance; most of them were immigrants. A participant shared: "The first reason is that I do not often get sick. Besides, I am not financially sufficient, so I decided not to buy the insurance". Another participant said: "We are forced to buy it for every member of the family, so it is still unaffordable".
In addition, unlike the local participants, the immigrants found it difficult to use health insurance. Therefore, most of them accepted spending money on treatment at unregistered facilities when their health conditions required it. A participant mentioned: "I often use the services not on the correct line, so I do not dare using the insurance. Using the insurance on the correct line is very good, but for cases like me, waiting for the insurance is very time-consuming and takes a lot of procedures. That is why I still prefer treatment without the insurance". As for the local waste collectors, they underestimated health insurance: "My husband uses the insurance to cover from 2 to 3 million VND (equivalent to 86.34 to 129.51 USD) of medicine costs, except for several costly medicines that are not covered by the insurance". In addition, they thought that there were many ways to relieve their illnesses without having to go to the hospitals, so health insurance was not very necessary. A participant noted: "I choose to use the traditional medicine and found them effective, which cost from 80,000 to 100,000 VND (equivalent to 3.5 to 4.3 USD) per dose", and another waste collector shared: "I grew the plants and produced the traditional medicines myself to treat the stomachache. I used to go to the hospitals but switched to this method 2 to 3 years ago. Therefore, I did not need the health insurance".
Most waste collectors said that if they were seriously ill, they would try to earn or borrow several million VND to pay for treatment. However, if the amount was above ten million VND (equivalent to 432 USD), they would not be able to afford the treatment. Therefore, waste collectors were afraid of falling sick, especially those who were the mainstay of their families. A participant shared: "Despite being very healthy right now, I am still concerned as I am getting older. I am not sure about my health in the future. If I ever have a terminal disease, I might not be able to afford the treatment". Sharing a similar concern, another participant said: "I could afford to pay for the treatment of common illnesses. If it were more severe, such as a terminal disease, I would not be able to afford it".
In fact, the majority of study participants did not have severe diseases requiring treatment. However, if they had a serious illness, they could not afford treatment and would try to borrow money from their relatives, friends and neighbors, whether or not they had health insurance.
The study results show that the majority of waste collectors in our study were of Kinh ethnicity, originating from neighboring provinces (over 30 people). The working age of the target group was from 30 to 65 years, and they were mainly female. The results show that the target group's education level was mainly secondary. Studies in Brazil and other countries found similar results: most waste collectors were women, over 30 years old and with a low level of education, and they had few job options other than waste picking due to high illiteracy and low education levels [1, 20-23].
In addition, all study participants were married, and the majority of them lived with their spouse. This was similar to previous studies in Vietnam, in which nearly 90% of participants were married and living with a husband/wife [9, 10]. Moreover, the study shows that the local waste collectors lived in their own houses, while the immigrant residents stayed in rental accommodation; many of the immigrants therefore lived with their co-workers or relatives, and their children. This is one of the new aspects that has not been mentioned in any previous study in Vietnam. Our study also found that musculoskeletal diseases accounted for the majority of illnesses among waste collectors. This was consistent with findings from previous studies: musculoskeletal problems, involving the muscles, joints, tendons, ligaments, bones and localized blood circulation, are common among waste collectors [4, 24, 25]. In fact, waste management encompasses many activities, including collecting, classifying, recycling and selling garbage, and there are many risks involved in this process, from the collection site, during transportation, and at the recycling sites [26]. These risks affect waste collectors' health. A noteworthy point of this study is that waste collectors had mental health problems due to life pressure and job risks.
At the time of the in-depth interviews, waste collectors still underestimated the availability of community-based clinics, mainly due to the lack of medical equipment, asynchronous infrastructure and the limited quality of medicines. In reality, however, the infrastructure, equipment and medicines at the district-level health center and commune-level health stations were being improved and rebuilt, for example at the health station of Phuc Dien ward. Despite being unable to meet the district population's healthcare needs in a timely manner, this is still regarded as one of the innovative steps in local healthcare activities. According to the interviewees, the equipment, medicines and especially the professionalism of the doctors in central-level hospitals were better than at the health stations, pharmacies and private clinics in Bac Tu Liem district. This made it even more common for the migrants and local people to go to central hospitals instead of healthcare facilities at their current residence in Bac Tu Liem district.
The interviewed waste collectors had no routine of regular health examinations due to their limited budgets, which did not allow them to prioritize disease prevention. They might know that regular health examinations and early detection would mean less money spent, shorter treatment times and a higher possibility of recovery. A study by Bogale indicated that in low-income countries waste collectors have a low socio-economic status, including poverty, poor housing conditions and poor nutrition, which affects their uptake of regular health examinations [4]. A routine checkup program for all waste collectors is mandatory to keep them safe and secure [27-29].
Nearly half of the participants did not have health insurance, mostly the immigrants. This differed from the study of Hoang Thi Ngan in 2017, in which the majority of waste collectors were formal workers with health insurance and social insurance paid by their company [10]. As informal workers, waste collectors have difficulties accessing social benefits such as health insurance, pensions and unemployment insurance [30]. Unlike the local people, it is difficult for the migrants to use health insurance, because each time they want to use it at their registered level, they have to go back to their hometown and follow procedures that they consider cumbersome. Therefore, most of them accepted spending money on treatment at unregistered facilities (for cases that needed treatment). The local people, on the other hand, underestimated health insurance: from their viewpoint, the insurance did not cover the needed medications, which are quite expensive. Those who had health insurance were still very concerned, since the cost of healthcare services covered by health insurance was relatively low; the waste collectors therefore had to pay extra for some medicines and healthcare services, and the more severe the disease, the more extra cost they had to spend on medicines. This affects the family economy of a group that is financially unstable: if they fall ill, they are no longer healthy enough to continue their work, and the resulting economic hardship becomes a burden not only on themselves but also on their families. Previous studies have pointed out that it would be more difficult for workers, including waste collectors, to access health services if a disruption occurred in the healthcare system [31], such as during the COVID-19 pandemic. This may be explained by the fact that waste collectors are one of the vulnerable groups in society and need support in emergency situations.
Our study is also subject to several limitations. First, we used snowball sampling to recruit the participants; therefore, there could be a gender bias. Second, we did not interview stakeholders. Third, we did not categorize in more detail the different health care services offered by health providers. Lastly, our findings may not be generalizable to waste collectors across the whole of Vietnam. There is therefore a need for future studies to address these limitations.
In conclusion, waste collectors were coping with difficulties in accessing health services, including geographical accessibility, the availability of health facilities, acceptance of the quality of health services, health insurance, and affordability. We call for prospective studies to confirm our findings. Moreover, our results suggest that stakeholders in the fields of health, labor and social affairs need to engage with the community of waste collectors and pay attention to the differences between local and immigrant waste collectors in terms of access to healthcare services.
|
and diarrhea (3). It is not uncommon for viral infections to cause skin rashes; for example, measles, rubella, and dengue fever all cause viral exanthems. However, the prevalence and pattern of cutaneous involvement in COVID-19 are unknown. Guan et al. described 2 (0.2%) patients who developed a skin rash among the 1,099 patients enrolled (4). However, that study did not describe the detailed skin manifestations, cutaneous symptoms, or timing of symptom onset, nor the criteria used to diagnose the skin lesions and enroll them into the dataset. Since viral exanthems are not uncommon in viral infections, we were curious about skin manifestations in COVID-19. Meanwhile, we were keen to explore whether there is a distinctive cutaneous feature that can help differentiate coronavirus disease (COVID-19) from other viral infections (5).
In Italy, COVID-19 has claimed over ten thousand lives, including more than 60 doctors. We [...] (7), whereas viremia of parvovirus B19 ends before the onset of the skin rash (8). Hence, the dynamic viral load and its relation to the skin rash can become a vital clinical clue for clinicians in determining the optimal timing (before, during, or after the skin rash) for collecting samples for molecular identification.
Having observed the heavy burden of triage and the shortage of essential medical goods caused by the spread of COVID-19, we believe an easy clinical assessment tool such as a classic COVID-19 skin manifestation would offer a novel way to cope with the challenges we face during the pandemic. However, more studies will be needed to establish its validity and reliability.
Dermatology's outlook in the COVID-19 era is multidimensional: from pathogenesis and public health issues to the application of new technologies in clinical practice, the opportunities are infinite.
Most importantly, we dermatologists as part of the medical community should contribute our unique perspective in the battle against this formidable pandemic.
|
Policy makers as well as the scientific community are paying increasing attention to food systems. Even though there is no universally accepted definition of what a food system is, the framework of Bene et al. (2019) outlines the main challenges of feeding the world today and in the future under environmental constraints. In this framework, the global food system is seen as an interconnected set of activities, including input supply, production, postharvest storage, processing, distribution, marketing and retail, and consumption, in which the impact of food on health, cultural identities, governance and economics, and sustainability plays a prominent role.
Our current food systems are at increasing risk of failing us. Major failures relate to production and nutritional targets, inclusivity, and environmental footprint. To address these challenges, many initiatives and targets have been proposed. Unfortunately, progress on many of these goals is patchy, and we are not on track to achieving them. For example, in relation to healthy food systems, we are not reducing child undernourishment fast enough to achieve the WHO Global Nutrition Targets in sub-Saharan Africa, the Pacific and Central and South Asia (Kinyoki et al., 2020). In relation to climate-resilient food systems, we are falling short on the actions needed to limit global warming and may be on track to a 3.1-3.7 °C warmer world, which would be disastrous for food systems (du Pont and Meinshausen, 2018). Many food system actors are highly vulnerable: there will be at least 700 million small-scale agricultural producers in 2030, for example, and we are not on the right pathway to build their resilience to extreme events within a short period of time.
There is a large literature on the idea of reconfiguring food systems. Some argue that major changes in governance and use of natural resources are required (Neufeldt et al., 2013) , fostered through a pro-poor and inclusive structural reconfiguration (FAO et al., 2019) , including gender-based approaches (Wong et al., 2019) . Some documents list a menu of different actions (Searchinger et al., 2019a) , and others present syntheses, highlighting that food systems changes need to be driven by social, environmental, and economic progress (Meridian Institute, 2020). There is broad agreement in this literature that current trajectories are not going to be enough to meet the Paris Agreement and the Sustainable Development Goals, and that the current pace of change is worryingly slow (EAT-LANCET Commission, 2019; IPBES, 2019; FAO, 2018; FOLU, 2019; De Cleene, 2019; Dury et al., 2019; Government of Norway, 2019; Steiner et al., 2020) .
Drastic changes in food systems are essential if we are to achieve a food-secure and sustainable future. What might feasible pathways to such a future look like, and what might they involve? Some idea can be gained from looking to the past. Many periodizations of agricultural history are possible; combining elements of culture (Bentley, 1996) and production (Grigg, 1974; Grinin, 2007) we highlight three of several great reconfigurations:
• Sedentarisation, allowing for seed and livestock domestication (first starting in Western Asia about 12,000 years ago);
• Diffusion, characterized by agricultural expansion and by local innovations and practices dispersed through the development of large complex civilizations, conquest, mass migration, and international trade (up to 500 years ago);
• The great acceleration, driven by widespread invention and innovation (starting with the scientific revolution of the 16th century and ongoing).
For such reconfigurations, human societies needed to reroute themselves onto new trajectories. Early agriculture made it possible for relatively large concentrations of people to live in close proximity, giving rise to large communities and the division and specialisation of labour. The diffusion of crops, livestock and technology such as irrigation and the plough brought about substantial gains in productivity and enormous economic opportunities in some parts of the world. The great acceleration saw the continuing replacement of labour with capital on a massive scale and substantial increases in food availability for burgeoning human populations.
Each of these reconfigurations was also accompanied by great environmental, social and cultural challenges. For example, sedentarisation led to the need to develop new social structures capable of organising cities made up of thousands (later, tens of thousands) of people. Dispersal brought with it exploitation of indigenous societies and transfer across continents of many infectious diseases for which there was no natural population immunity (e.g., Columbian Exchange and spread of Bubonic plague from China to Europe). The great acceleration has involved large yield increases per hectare and land expansion, with many environmental problems arising as a result. Historically, great food systems reconfigurations have involved four interlocking elements, broadly speaking: rerouting old systems onto new trajectories; the emergence and treatment of new socio-cultural issues, as a result; the emergence and treatment of new environmental issues; and realignment or reinvention of the "enablers of change", such as the policies, regulatory frameworks, financial mechanisms and innovation systems needed to make new food systems function.
The change we need in food systems today is of the same order of magnitude as these historical reconfigurations. Those reconfigurations were long, drawn-out processes, and we do not have the luxury of centuries of time. By ratifying or acceding to the 2015 Paris Agreement, 188 countries and the EU have agreed that these reconfigurations need to happen in the next ten years if we are to achieve zero hunger and gender equality and avoid dangerous climate change. Is such rapid, deep-seated change even possible? From 2018 onwards, the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) worked with partners to consider how to achieve this rapid, deep-seated change in food systems. Background papers on strategic areas to foster these reconfigurations were developed and presented at international events, accompanied by deep discussions with over 1000 stakeholders from all over the world. More than 100 partner organizations engaged in participatory processes to evaluate and sharpen this strategic agenda, culminating in the report of Steiner et al. (2020).
That report highlights action around the four interlocking elements identified above: rerouting farming trajectories; increasing the resilience of all the agents involved in rapid change (reducing risks); minimising the environmental footprint of food systems (from a climate change perspective, a focus on reducing emissions); and realigning the enablers of change. Some glimpses of the type of changes needed in each action area are highlighted below.
There are few examples of the simultaneous, relatively rapid and large-scale changes that are needed to reconfigure food systems. One example is the Tigray experience in Ethiopia. Through community work and local leadership, an epicentre of starvation was transformed into a self-sufficient and green region contributing to higher crop yields. More than one million hectares have been restored in Tigray, allowing farmers to produce fruits and vegetables even in drought years (Thornton and Kristjanson, 2018). Other examples range from a Nigeria- and US-based agricultural technology social enterprise (Hello Tractor, 2018), demonstrating how to reroute farming trajectories by supporting rural reinvigoration (Cabral and Sumberg, 2017), to the African Risk Capacity, which addresses social challenges through actions to reduce risk in agriculture and thereby increase the resilience of smallholder farmers (www.africanriskcapacity.org).
Similarly, initiatives to minimise the environmental footprint of food systems through reductions in emissions from diets and value chains are presented, including plant-based meat alternatives. Industry interest, reflected in increasing investment in new alternative-protein start-ups, shows that there is potential for significant growth in this sector (Sexton et al., 2019; Byrd, 2018; O'Neil, 2017). Finally, the AGRI3 Fund, which aims to channel US$1 billion for sustainable agriculture and forest conservation, is a good example of realigning enablers of change in order to unlock billions in sustainable finance (Millan et al., 2019; Steiner et al., 2020).
These are just a few examples of many, which illustrate the breadth and type of changes that will be needed for food system reconfiguration at scale (Steiner et al., 2020) .
One of the key challenges in reconfiguring food systems is the enormous variability in farm types and farming systems; it is often difficult to generalise from one farm household to another, and no "silver bullets" have yet been identified that will lead to beneficial impacts in all situations (Fraser et al., 2006; Keating et al., 2014; Scoones et al., 2020). Climate change and many other change drivers are already bringing about reconfigurations in farm households in some places, for example in response to crop and livestock suitability changes and to market signals (Vermeulen et al., 2018). Interventions need to address the current needs and future aspirations of farming households, while ensuring that economic, social, cultural and environmental benefits are not compromised, now or in the future. Appropriate targeting will help to improve the efficiency of the agricultural development process and avoid the unintended omission of particular groups of vulnerable people (Laurent et al., 1999; Lopez-Ridaura et al., 2018).
Many types of farmer exist, but the following four well illustrate the different sets of interventions and enablers needed to move to environmentally, socio-culturally and economically sustainable food systems in the future (Fig. 2). For larger-scale commercial farms, of which there are perhaps 70 million globally, pathways will generally need to focus particularly on improving environmental goals. Pathways for small-scale farms (320 million globally) characterized by small plot sizes (<0.5 ha) may focus on increasing their integration into local markets, with some farmers accessing digital information and making better decisions (households that are "stepping up", Dorward et al., 2009). Extensive farm households such as pastoral and agro- and silvo-pastoral farmers (around 30 million) are often located in environments with high climatic risk; pathways for these households may have more to do with building assets and utilising safety nets to increase their productivity and enhance their resilience (households "hanging in" but in time transitioning to "stepping up"). There are some 150 million lower-endowment small-scale farmers, including urban and niche producers (organic, free range) as well as those who are "hanging in" and food insecure. Pathways that revitalise rural economies and help to provide economic opportunities in urban and peri-urban areas can help those who want to "step up" as well as those wanting to "step out" of agriculture to engage in other livelihood strategies (Dorward et al., 2009).
We know what needs to be done: Steiner et al. (2020) identifies the action areas, the actions, the potential partners, the where and the how. Indeed, some of the partners in the initiative, including the UN's World Food Programme (WFP), the World Business Council for Sustainable Development (WBCSD) and the Department for International Development (DFID), are already taking on board the recommendations arising from this initiative to develop their own strategies. However, much more needs to be done, at much broader scale.
What then is needed to make this reconfiguration happen? First, we need a deeper understanding of plausible, inclusive trajectories of change at local and landscape levels that embrace the variability in farms and farmers. Second, we need the finance. As Herrero and Thornton (2020) point out, governments globally came up with more than USD 8 trillion in the eight-week period from mid-February to mid-March 2020 in response to the COVID-19 pandemic. This shows what is collectively possible when faced with a grave, existential threat; food system reconfiguration has been estimated to require USD 2-3 trillion by 2030 (UNEP, 2016; Searchinger et al., 2019b). Third, we need, now more than ever, the collective will to change. Some of us are producers, but all of us are consumers. Current events are teaching us some extremely hard lessons that urgently need to be applied to our food systems.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
response to the Ebola outbreak [1] . Reformation of WHO to ready it to lead responses to future health emergencies is one area of active debate.
Chan will step down from WHO on June 30, 2017, after more than a decade in the post. The process for choosing WHO's next leader has begun, promising to be protracted and rigorous, as befits the importance of the role. Given the many influential stakeholders in the process of appointing Chan's successor, however, the transparency of the selection process may be one area unlikely to attract plaudits. Although it is too soon to speculate about the identity of WHO's next Director-General, it is worth reflecting on what qualities an incoming leader should bring to WHO and how that person might need to conceive changes in the structure and behavior of the organization against a landscape of important and evolving threats to the health of the fast-growing global population.
Instead of electing a new Director-General, Lorenz Von Seidlein of Mahidol University, Thailand, argued that "the problems. . .are now so deeply ingrained that replacing the WHO with new, more appropriate organizations is the logical solution. . .at a fraction of current cost, free of cumbersome, archaic obligations and entitlements and [with] an ability to respond to new problems." This viewpoint is indicative of the strength of feeling that WHO's deficiencies have come to evoke in some of those committed to the cause of improving the health of people in low-income and middle-income countries. But this perception acknowledges that an accountable global body will always be needed to promote, set standards in, and evaluate progress toward better health for people in all countries. The next Director-General will need to heed critics of the organization and craft a process of streamlining and restructuring to produce a new WHO that is demonstrably effective in leading responses to threats to health, and efficient in doing so. As Gostin commented to PLOS Medicine, "WHO urgently needs a bold reform agenda to fix long-standing problems recognized by every independent group that has evaluated the Organization." Political machinations and the enemy within, bureaucracy, are likely to impede reform. For example, WHO's regional and country offices are seen by some as unaccountable, yet the agency of the future will need to be connected and responsive to the resources and needs of all constituent countries. As Gostin also noted, "[WHO] has failed to include civil society in its governance, unlike. . .newer organizations."
WHO's next Director-General should be a proven leader and advocate, perhaps from a lowincome or middle-income country. The new recruit will be greeted by a full in-tray, and featuring prominently are likely to be the constraints imposed by WHO's current funding mechanisms. A substantial proportion of WHO's existing budget is earmarked for specific projects, leaving the organization with little financial flexibility to respond to unanticipated demands. However, any improved funding mechanism is likely to follow, and be dependent on, organizational reform. According to Kruk, "WHO is both essential and hamstrung. . .the election of the Director-General should be a moment for member countries and other funders to reflect on whether they want an implementation agency for their favored health agenda, or an independent institution with the intelligence, agility, and operational capacity to tackle the coming global health challenges." Above all, the incoming leader of WHO will need to be open-minded and creative. More than one of the experts we contacted emphasized the fluid nature of the threats to human health to which WHO should shape the world's response. WHO must be able to lead responses in some areas of global health, but, in other areas, working together with more nimble and focused organizations will be pragmatic. Large-scale infectious disease outbreaks are continuing, and noncommunicable diseases, including cancer, dementia, and mental illnesses, are growing in prevalence and increasing demand for treatment and care. The resources and ingenuity of researchers and clinicians will need to be harnessed, and interventions adapted to new settings, with much greater dynamism. The secular issues of population ageing, conflict, climate change, migration, and others will produce health problems that only an organization with a global reach, responsible to all, can hope to meet. We look forward to welcoming a new leader for WHO with the energy and vision to remold the organization to meet the health needs of the world's people and societies for the 21st century.
|
As the world has become increasingly interdependent and connected, disasters and crises that happen in a single place can cause significant general economic and tourism-specific effects across a broader area or worldwide. During the past decades, the tourism industry has suffered multiple disruptive events, including the outbreak of foot-and-mouth disease in the United Kingdom (2001), the September 11 terrorist attacks (2001), the Indian Ocean earthquake and tsunami (2004), and the global economic crisis of 2008/2009. Epidemics, such as that of severe acute respiratory syndrome (SARS) in 2003, have likewise put the global travel and tourism sector at risk, and countries that are heavily exposed to the pandemic were the hardest hit [20, 21].
The nexus between COVID-19 and tourism has been approached from different perspectives. The majority of research has estimated the effects of the COVID-19 pandemic on travel and tourism, assessing impacts on the aviation industry, tourism, and the socio-economy [21-26] and estimating the willingness to pay to reduce the negative effects of the pandemic [27, 28]. Some studies suggested the potential of interdisciplinary research collaboration with regard to COVID-19 [29] and discussed the relationship between tourism and sustainable development through the lens of the crisis [30]. Tourism, by the nature of its system, has experienced the impacts of COVID-19, but it also originally contributed to the spread of the disease. Thus, some papers have instead assessed the effects of travel and tourism on the number of COVID-19 cases [31, 32] or estimated the effectiveness of travel restrictions on the spread of the disease [33, 34].
The impacts of COVID-19 on the tourism industry can be either observed or projected. Most researchers have approached the tourism effects of COVID-19 through observed influences. Using daily data on global COVID-19 cases and flight volumes, Gössling et al. [21] visually represented the negative impact of COVID-19 on the number of global flights; the study also compared COVID-19 with previous global infectious diseases and explored the effects of the pandemic on the socio-economy and tourism. At a national level, the impacts of the pandemic on the aviation industry and tourism accommodation in Malaysia were assessed by Karim, Haque [23]. Dinarto, Wanto [25] reported the impacts of the crisis on the tourism sector in Bintan, an island in the Riau Archipelago, Indonesia, while another study, conducted by Correa-Martínez, Kampmeier [24], discussed the spread of the coronavirus in a skiing area in Austria. These observed impacts of COVID-19 on tourism were generally analyzed using descriptive statistical methods.
In terms of projected impacts, a 20 May 2020 press release from the UN World Tourism Organization (UNWTO) estimated that the pandemic would cause a 60-80% loss in international tourist arrivals [19], rather than the 20-30% decline forecast two months earlier, on 26 March [35]. The uncertainty in the predicted impact reflected the unprecedented evolution of the COVID-19 pandemic during the first half of 2020. In a semi-annual report, the International Air Transport Association (IATA) estimated that revenue passenger kilometers (RPK) would decrease by 54.7% compared with 2019 [36]. Iacus, Natale [22] predicted that, in the worst scenarios, COVID-19 travel restrictions could affect 1.67% of overall GDP, generating a loss of US$ 323 billion and 30.3 million job losses worldwide. The World Travel & Tourism Council (WTTC) [20] estimated much higher losses, namely over 100 million tourism job losses and total economic losses of up to US$ 2.7 trillion in 2020. These projected impacts are calculated by analyzing data and metadata provided in relation to regular estimates published by tourism organizations.
A projection is a statistic indicating what a value would be if the assumptions about the future held true. If such estimates are not interpreted with extreme caution, especially given the uncertain development of the crisis, coping strategies could be oriented ineffectively. Unlike the methodologies used to project the tourism impacts of COVID-19, a regression approach to prediction does not necessarily involve predicting the future. Instead, we predict the mean of the dependent variable given specific values of the independent variables. This approach allows us to quantify the effects of observed variables and some disturbances, test whether the parameters are statistically significant, and then use them for prediction and policy purposes. In this study, the relationship between COVID-19 and tourism is clarified using econometric models.
Considered a direct predecessor of COVID-19, SARS was first identified in Guangdong, China in November 2002 and later spread across many Asian countries and regions. Outside mainland China, Hong Kong and Taiwan experienced the most serious impacts of the SARS outbreak, with 1755 cases and 299 deaths in Hong Kong and 346 cases and 73 deaths in Taiwan [37]. The two economies put the lessons learned from the 2003 SARS outbreak to good use, responding more quickly and effectively when COVID-19 emerged. As of April 30, when more than 3 million infections had been reported worldwide (an average of about 14,000 cases per country), Hong Kong and Taiwan had only 1036 and 430 confirmed cases, respectively. Even though the effect of travel restrictions is immediate and undeniable, not all economies imposed a travel ban quickly and simultaneously. During these time gaps, the magnitude of the impact of COVID-19 on the number of international arrivals may differ significantly among tourist destinations, depending on how safe a destination is perceived to be.
To contribute to a better understanding of the effects of COVID-19 on tourism, with a special focus on SARS experience, our study applies panel data regression models to estimate the epidemic-tourism relationship for four countries/economies: Taiwan, Hong Kong, Thailand, and New Zealand. Specifically, we (1) quantify the impact of the number of COVID-19 confirmed cases on the number of international tourist arrivals and (2) decompose the data set into different groups of economies to assess the differences in the impact of the COVID-19 pandemic on tourism between groups with and without SARS experience.
In the following sections, we provide the theoretical and empirical model structure for constructing the relationship between related-COVID-19 variables and tourism. Section 3 presents the estimation results and discussion. Finally, policy implications and concluding remarks are provided in Section 4.
In this article, a panel regression model is employed to depict the relationship between the COVID-19 pandemic and international tourist arrivals. A multiple panel regression model is assumed as follows:

Y_it = X_it β + u_it    (1)

where i and t denote, respectively, the indexes of the individual (economy) and the time (date), Y is tourist arrivals, and X is a set of independent variables, e.g., the number of confirmed cases, the number of global deaths, the exchange rate, and a dummy variable for the travel restriction policy. The estimated parameters give the average effect of the explanatory variables on tourism demand. u_it is the error term with two components, u_it = μ_i + v_it, where μ_i is the unobservable individual-specific effect and v_it is the random disturbance (for further details, see Baltagi 1995 [38]). Before estimating the tourism demand function, it is important to establish the correct panel form. The Hausman test is applied to choose between fixed and random effects under the null hypothesis (H0) that μ_i is not correlated with X_it; in other words, that the random effects estimator is consistent and efficient. The alternative hypothesis (H1) is that the random effects estimator is inconsistent.
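The FE-versus-RE choice via the Hausman test can be reproduced in a few lines. The sketch below is illustrative only: it assumes Python's linearmodels package and hypothetical column names (ln_arrivals, ln_covid_case_lag7, travel_ban, ln_er) that do not come from the paper, whose analysis was run in Stata.

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# df: long-format panel indexed by (economy, date); column names are
# hypothetical stand-ins for the log-transformed regressors in Equation (2).
dep = df["ln_arrivals"]
exog = df[["ln_covid_case_lag7", "travel_ban", "ln_er"]]

fe = PanelOLS(dep, exog, entity_effects=True).fit()
re = RandomEffects(dep, exog).fit()

# Hausman statistic: H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE),
# chi-squared with k degrees of freedom under H0 (RE consistent and efficient)
b = fe.params - re.params
v = fe.cov - re.cov
H = float(b.T @ np.linalg.inv(v) @ b)
p = stats.chi2.sf(H, df=len(b))
print(f"Hausman H = {H:.2f}, p = {p:.4f}")  # a small p-value favours FE
```

A rejection of H0 in such a test would mirror the paper's conclusion that the fixed effects specification is the more appropriate one.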
The empirical panel data model is given in natural logarithm form as follows:

ln Y_it = β0 + β1 ln COVID_case_i,t-7 + β2 Travel_ban_it + β3 ln ER_it + u_it    (2)
The number of COVID-19 confirmed cases (COVID_case) and the date on which the travel restriction policy (Travel_ban) came into effect in the tourist destination are the major pandemic-related variables. As a travel restriction is considered one of the most effective policies for mitigating virus transmission from imported cases, in comparison with other measures [33], it is included in our estimation. Travel_ban is a dummy variable that equals 1 from the date the travel restrictions take effect. Among the other influencing factors, we also consider the foreign exchange rate (ER) an important determinant of tourism. ER is the amount of the tourist destination's currency required to exchange for one US dollar; the variable is therefore expected to have a positive effect on tourism demand. Finally, since people usually plan their travel days or weeks before departure, a time-lag effect is also included: we use a 7-day lag for the daily COVID-19 confirmed cases in destination countries or regions. All analyses are conducted using Stata version 14.0.
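For readers who prefer a script-based workflow, the variable construction just described can be sketched as follows. This is a hedged illustration, not the paper's code: the frame `panel` and its column names are assumptions, and only the travel-ban dates are taken from the text.

```python
import numpy as np
import pandas as pd

panel = panel.sort_values(["economy", "date"])  # assumed long-format frame

# 7-day lag of confirmed cases within each economy
panel["covid_case_lag7"] = panel.groupby("economy")["covid_case"].shift(7)

# Dummy = 1 from the date each travel restriction took effect (dates from the text);
# panel["date"] is assumed to already be a datetime column
ban_dates = {"Taiwan": "2020-03-19", "New Zealand": "2020-03-19",
             "Hong Kong": "2020-03-25", "Thailand": "2020-03-29"}
panel["travel_ban"] = (panel["date"] >=
                       pd.to_datetime(panel["economy"].map(ban_dates))).astype(int)

# Natural logs; log1p guards against zero case counts early in the sample,
# and arrivals are assumed strictly positive
panel["ln_arrivals"] = np.log(panel["arrivals"])
panel["ln_covid_case_lag7"] = np.log1p(panel["covid_case_lag7"])
panel["ln_er"] = np.log(panel["er"])

df = panel.set_index(["economy", "date"])  # MultiIndex expected by linearmodels
```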
We obtained daily data on COVID-19 confirmed cases and international tourist arrivals from the official websites of each economy. Owing to data availability, the sample economies are Taiwan, Hong Kong, Thailand, and New Zealand, and the study period is 1 January-30 April 2020, except for tourist arrivals to Hong Kong (24 January-30 April 2020). The tourism data for Hong Kong are the total daily number of arrivals of non-Hong Kong residents. Data on tourist arrivals to Taiwan and Thailand are passenger volumes at Taoyuan airport (total number of passengers) and Bangkok airport (foreign nationals), respectively. For New Zealand, a large file containing information on every single movement across New Zealand's borders was retrieved and filtered for daily totals of non-resident arrivals. Data sources are specified in Table 1.

Table 1. Summary of data sources.

COVID-19 confirmed cases (daily):
- Taiwan: CDC (Centers of Disease Control) [39]
- Hong Kong: Government Open Data Platform [40]
- Thailand: DDC (Department of Disease Control) [41]
- New Zealand: Ministry of Health [42]

Tourist arrivals (daily number of international tourist arrivals):
- Taiwan: Taoyuan International Airport Co. Ltd. [43]
- Hong Kong: Immigration Department [44]
- Thailand: Immigration Bureau [45]
- New Zealand: New Zealand's official data agency [46]

Data on the exchange rate are retrieved from the webpage of the Economic Research Division of the Federal Reserve Bank of St. Louis. In response to the continued spread of COVID-19, Taiwan and New Zealand barred foreign nationals from entering their territory starting March 19; Hong Kong closed its borders to all non-residents from 25 March; and Thailand put its travel restriction into effect starting 29 March. Summary statistics are presented in Table 2, which shows that higher means are associated with higher standard deviations. The variation in tourist arrivals in Hong Kong is more dramatic than in the three remaining economies. Among the four selected tourist destinations, the average number of COVID-19 confirmed cases was highest in Thailand at about 24 cases per day, followed by New Zealand (9 cases per day), Hong Kong (8 cases per day), and Taiwan (4 cases per day).
Three estimation approaches for Equation (2) were applied: pooled ordinary least squares (OLS), fixed effects (FE), and random effects (RE). Table 3 displays the results of all panel regression models with only three explanatory variables, since the RE estimation procedure requires the number of cross-sections to exceed the number of coefficients. As the Hausman test result shows that the FE model is more appropriate, we add one more independent variable, the daily number of global confirmed COVID-19 deaths, to the FE model. The parameter estimates are presented in Table 4. Since the logarithmic form was adopted, the coefficients displayed in Tables 3 and 4 can be read as elasticities. Table 4 shows that the coefficients of national confirmed cases and global deaths are all negative and significant, indicating that the tourism industry in the four sample economies was decimated by COVID-19 over the first four months of 2020. At the 1% level of significance, we find that a 1% increase in COVID-19 confirmed cases causes a 0.075% decline in tourist arrivals. Multiplying the estimated coefficient by the average of the dependent variable divided by the average number of confirmed cases, the estimate implies that daily tourist demand was reduced by about 110 arrivals for each additional person infected with COVID-19. A 1% increase in the number of global COVID-19 deaths lowered passenger volume by 0.049%. It is worth noting that the impact of travel restrictions on tourism is very significant and negative: the average number of international tourist arrivals after the travel ban came into effect in the four tourist destinations is lower than in prior periods by about 321.5%. Finally, the exchange-rate coefficient is significantly negative, which implies that tourism demand kept falling even as the destinations' currencies depreciated. This indicates that tourists selecting the four sample economies as their destinations exhibited a low sensitivity to price changes.
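The elasticity-to-arrivals conversion described above is simple arithmetic; a back-of-envelope sketch follows, with sample means that are illustrative placeholders chosen only to reproduce the reported magnitude (they are not the paper's actual summary statistics).

```python
beta = -0.075          # estimated elasticity of arrivals w.r.t. confirmed cases
mean_arrivals = 16500  # assumed sample mean of daily tourist arrivals (placeholder)
mean_cases = 11.3      # assumed sample mean of daily confirmed cases (placeholder)

# dY/dX evaluated at the sample means: beta * (Y_bar / X_bar)
marginal_effect = beta * mean_arrivals / mean_cases
print(f"{marginal_effect:.0f} arrivals per additional confirmed case")  # about -110
```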
Lessons learned from the 2003 SARS experience helped the governments of Taiwan and Hong Kong establish effective public health response mechanisms, enabling quick action against COVID-19 and leading to very low rates of indigenous transmission in these two Northeast Asian economies. During the epidemic, the image of such tourist destinations is therefore perceived as safer than that of destinations with little or no SARS experience. From that perspective, the impact of the COVID-19 outbreak on international tourism is also expected to differ according to a destination's SARS experience. The study goes one step further and estimates the effects of COVID-19 on tourist arrivals in groups of economies with and without experience and risk perception of pandemic. The data are decomposed into two sets: the first includes Taiwan and Hong Kong, and the second includes Thailand and New Zealand. In fact, New Zealand and Thailand both had people infected during the 2003 SARS outbreak; however, as these numbers were very small compared with Hong Kong or Taiwan, Thailand and New Zealand are still treated as having little or no SARS experience. Additionally, according to 2019 estimates from the World Travel & Tourism Council [7], the contributions of the tourism sector to GDP and total employment in Thailand and New Zealand were higher than in Hong Kong and Taiwan; from this perspective, the grouping is also consistent in terms of each economy's reliance on tourism. The fixed effects model is then applied to these two data sets (a code sketch of this grouped estimation follows the findings below), since (1) the FE model controls for unobserved heterogeneity and time-invariant factors, unlike OLS, and (2) prior testing confirms that it captures the pandemic-tourism relationship better than RE models. The effects of COVID-19 on tourism demand in the two groups, estimated using FE models, are presented in Table 5. All pandemic-related coefficients are significant and negative. However, the magnitudes of the effects of national cases and global deaths on international tourism differ significantly both within and between the two groups. First, the impact of COVID-19 on tourism is much more severe in the destinations without SARS experience than in those that experienced the earlier epidemic. The estimation results indicate that average tourist arrivals to Taiwan and Hong Kong decreased by 0.034% in response to a 1% increase in COVID-19 confirmed cases, while in Thailand and New Zealand a 1% increase in national confirmed cases caused a 0.103% reduction in tourism demand.
Second, we find that the effect of global COVID-19 mortality on international tourism in Taiwan and Hong Kong is stronger than the effect of the number of confirmed cases. For Thailand and New Zealand, the average number of arrivals is influenced by information about the number of COVID-19 cases in the destination country much more than by global pandemic-related mortality. This finding points to a carry-over effect of the earlier SARS epidemic: because Taiwan and Hong Kong suffered serious losses from SARS, their governments responded to the COVID-19 outbreak more quickly and effectively than economies with little or no SARS experience, such as Thailand and New Zealand. This helps explain how Taiwan and Hong Kong kept their coronavirus infection rates very low despite their proximity to China. A low infection rate is likely one of the main reasons why tourism in these economies tended to be affected more by information about global COVID-19 deaths than by the number of domestic confirmed cases.
Third, the exchange-rate coefficients are significantly positive only for the first group, meaning that currency depreciation in Taiwan and Hong Kong would encourage tourism demand, and vice versa. Finally, the travel restriction policy adopted in response to the spread of COVID-19 is the major influencing factor for tourism in both groups; however, the effect of a travel ban on tourism decline was much stronger in Thailand and New Zealand than in Taiwan and Hong Kong.
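As referenced above, the grouped estimation can be sketched in the same framework as the earlier snippets; the indexed frame `df` and its column names remain assumptions rather than the paper's actual code.

```python
from linearmodels.panel import PanelOLS

groups = {"SARS-experienced": ["Taiwan", "Hong Kong"],
          "little/no SARS experience": ["Thailand", "New Zealand"]}

for label, members in groups.items():
    sub = df[df.index.get_level_values("economy").isin(members)]
    res = PanelOLS(sub["ln_arrivals"],
                   sub[["ln_covid_case_lag7", "travel_ban", "ln_er"]],
                   entity_effects=True).fit()
    print(label, res.params.round(3).to_dict())  # group-specific elasticities
```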
As the tourism sector is one of the hardest hit by the COVID-19 outbreak, a clear understanding of the relationship between the crisis and tourism demand is critical for identifying appropriate adaptation strategies that minimize the negative impact of COVID-19 on the economy in general and the tourism industry in particular. This study contributes to the existing literature by quantifying the tourism impacts of COVID-19 through the lens of SARS experience, using an econometric modeling approach. The panel regression results demonstrate how severely COVID-19 devastated international tourism in four APEC economies: the estimated average effect is a decline of approximately 110 arrivals for each additional person infected with the coronavirus during the first four months of 2020. Because demand for international tourism is significantly affected by destination countries' health security, the magnitude of the negative impacts of COVID-19 differs between the two groups, namely those with and without SARS experience. Our estimates reveal that the impacts of domestic COVID-19 cases on international tourist arrivals to Taiwan and Hong Kong were less severe than those in Thailand and New Zealand: average tourist arrivals to Taiwan and Hong Kong decreased by 0.034% in response to a 1% increase in COVID-19 confirmed cases, while in Thailand and New Zealand a 1% increase in national confirmed cases caused a 0.103% reduction in tourism demand. Under the same combination of variables, international tourism demand in the SARS-affected economies was influenced more by the number of global COVID-19 deaths than by domestic confirmed cases, whereas in the non-SARS-affected economies it was affected more severely by local COVID-19 cases than by global mortality.
The finding of disproportionate impacts of COVID-19-related factors on tourism demand in the two groups suggests that hard-won lessons from the past can help governments maintain pandemic risk awareness and combat the new coronavirus quickly and effectively. As COVID-19 has debilitated the tourism industry worldwide, keeping a destination's infection rate very low may not protect its tourism sector during the epidemic itself; however, this advantage not only helps economies minimize general economic damage but can also help the domestic tourism sector recover more quickly once the pandemic is under control.
In response to the global spread of COVID-19, travel restriction is one of the most effective isolated interventions for slowing the dispersion of the virus, especially when governments act rapidly. In this study, we find that the travel restriction had a larger effect on tourist arrivals in Thailand and New Zealand than in Taiwan and Hong Kong. This result could be explained by the Thai government's delay in responding to the pandemic. In Thailand, tourism has become one of the only sources of foreign exchange earnings since 1997 [47] and has experienced phenomenal growth in tourist arrivals in recent years. During the study period, Thailand had the largest number of domestic COVID-19 confirmed cases among the four economies; however, while Taiwan, New Zealand, and Hong Kong prohibited foreign visitors from entering from 19 March and 25 March 2020, the Thai government kept its borders open until 29 March. The underlying reasons for this procrastination in imposing a travel restriction may be the lack of pandemic experience and the economy's heavy dependence on international tourism. Based on the estimation results, all governments, even those of tourism-dependent economies, are advised to take swift action to contain the spread of the virus, since a delayed response can lead to uncontrollable virus transmission, with negative effects on both the tourism sector and the whole economy.
The study has several limitations that could be explored in future research. First, although New Zealand is grouped as a country with little SARS experience and is discussed alongside Thailand, it has thus far fared better than Thailand in controlling the disease, with a small number of cases and a low transmission rate. In other words, besides differences in SARS experience, the effects of COVID-19 on tourist arrivals in the two groups also reflect differences in many other influencing factors, such as health care systems, governments' efficiency in taking response actions, tourism market size, and the structure of tourists' source countries. Further research should therefore consider the pandemic-tourism nexus more comprehensively to achieve better estimates. Second, international tourist arrivals are affected not only by the destination country's travel restrictions but also by when source countries or regions began limiting their own residents from traveling abroad; approaches that treat travel restriction policy more thoroughly could provide more appropriate estimates. Third, owing to data availability, the number of cross-sections in the panel data model was limited to four. Sample countries for the non-SARS-affected group could be better selected (countries with large tourism market size, no SARS experience at all, and severe COVID-19 outbreaks) to investigate the pandemic-tourism relationship through the lens of SARS experience in more depth; a larger and better dataset is encouraged in future research. Finally, although the tourism industry is usually considered resilient, bouncing back relatively quickly after significant events, the effects of COVID-19 on tourism could differ significantly from other crises, as many potential risks of further outbreaks remain once travel restrictions are eased or removed. At the time of writing, a rise in locally transmitted cases in Hong Kong has been recorded, the number of COVID-19 infections worldwide has exceeded 30 million, deaths have surpassed 1 million, and unemployment figures have risen steeply in many countries. Hence, the impact of, and recovery from, the COVID-19 pandemic could be unprecedented and will require more intensive study in the future.
On December 31, 2019, when China reported 27 cases of an unidentified viral pneumonia outbreak to the WHO [48], the Taiwan CDC initiated health checks onboard flights from Wuhan. From January 5, 2020, all travelers who had been in Wuhan in the past 14 days and had a fever or symptoms of upper respiratory tract infection were screened for 26 viruses, including SARS and MERS [49]. These health screening measures were gradually expanded to flights from China, and then to all incoming flights (by March). In response to the growing number of COVID-19 infections in neighboring countries, on January 20, 2020, the Taiwan CDC announced the activation of the Central Epidemic Command Center (CECC) [50], which has played an important role in recognizing and controlling the COVID-19 health crisis. During the first 50 days of the outbreak, the CECC coordinated comprehensive efforts by various ministries to manage the health crisis, implementing more than 100 actions concerning border control, case identification, home isolation or quarantine, proactive case finding, the manufacture and allocation of masks, and strategies for enhancing the quality of education and reassuring the public [49, 51].
The experience of SARS, besides generating instrumental lessons in disease-control measures and policy planning for Taiwan's government, also effectively improved public epidemic awareness of COVID-19. For instance, compared with other economies, COVID-19 online information-seeking behavior appeared very early in Taiwan [52], which is very important for public health, as people recognized the severity of COVID-19 and proactively increased their epidemic awareness. More importantly, citizens and residents of Taiwan strictly adhere to the government's health guidelines to prevent the spread of the coronavirus. The unified support of the public is revealed by significant changes in health behaviors and hygiene practices, such as frequent hand-washing, the use of sanitizers, and wearing masks outdoors, especially when using public transportation.
Thus, thanks to a robust system of government agencies and hospitals and public acceptance of protective measures, policy responses to COVID-19 were implemented rapidly in the initial stages of the outbreak, helping Taiwan contain the epidemic effectively. As Taiwan has gone many consecutive days with zero new local infections while maintaining tight controls at the borders, the Taiwan government has planned to relax COVID-19 controls across its boundary and launch economic stimulus packages to revive retailers, spur consumption, and promote domestic tourism.
|
The coronavirus disease 2019 (COVID-19) pandemic witnessed several clusters of children with fever and multisystem inflammation resembling Kawasaki disease (KD). Due to the evidence of a preceding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in most of these patients, post-viral immunological reactions were thought to play an important role in the pathogenesis. 1, 2 The condition, called "pediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 infection (PIMS-TS)", has thus far been reported mainly from Europe and the United States, 1,2 and no cases have been diagnosed in Asia. We herein analyzed the clinical data on patients in whom KD was diagnosed during a local COVID-19 epidemic to investigate the relationship between KD and SARS-CoV-2 infections in Japan, which has the highest KD incidence in the world.
The present retrospective observational study was conducted at Tokyo Metropolitan Children's Medical Center (Tokyo, Japan), one of the largest pediatric referral centers in the nation. Children who received a KD diagnosis and were treated at our hospital between December 1, 2019 and May 31, 2020 were included. KD was diagnosed using the Japanese diagnostic criteria. Patients who were treated for KD elsewhere before being transferred to our hospital were excluded. To compare the clinical presentations before and during the local COVID-19 epidemic, the patients were divided into two groups, period 1 and period 2, based on whether the diagnosis was made before or after March 1, 2020. The patients' data were extracted from their medical records. Levels of anti-SARS-CoV-2 IgM and IgG in the sera were measured by chemiluminescent microparticle immunoassay using the iFlash 3000 chemiluminescent immune analyzer (YHLO Biotechnology Co., Ltd., Shenzhen, China) with a cutoff value of 10 AU/mL for IgM and IgG. Cryopreserved sera obtained before and one week after the initial intravenous immunoglobulin (IVIG) treatment
were analyzed. To evaluate the possible effect of IVIG on post-treatment serum anti-SARS-CoV-2 antibody levels, the anti-SARS-CoV-2 antibody levels in the IVIG preparations used for treatment were measured. The study was approved by our institutional review board (2020b-26).
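A minimal sketch of the grouping and serology classification described above, for illustration only: the data frame `kd` and its column names are assumptions, not part of the study; only the period boundary (March 1, 2020) and the 10 AU/mL cutoff come from the text.

```python
import pandas as pd

# Assign each KD diagnosis to period 1 (Dec 1, 2019 - Feb 29, 2020)
# or period 2 (Mar 1 - May 31, 2020)
kd["period"] = pd.cut(pd.to_datetime(kd["diagnosis_date"]),
                      bins=[pd.Timestamp("2019-12-01"),
                            pd.Timestamp("2020-03-01"),
                            pd.Timestamp("2020-06-01")],
                      labels=["period 1", "period 2"], right=False)

CUTOFF = 10.0  # AU/mL for both anti-SARS-CoV-2 IgM and IgG (from the Methods);
# values above the cutoff are treated as positive here
kd["seropositive"] = (kd["igm_au_ml"] > CUTOFF) | (kd["igg_au_ml"] > CUTOFF)
print(kd.groupby("period")["seropositive"].agg(["sum", "count"]))
```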
In total, 44 patients with KD were analyzed; 30 were from period 1, and 14 were from period 2 (Table 1). The median age was 2 years. No patients presented signs of shock. There was no significant difference in the clinical or laboratory characteristics of the two groups except for platelet count. Coronary artery aneurysms were observed in two (5%) patients at one month after the primary treatment.
Two patients were positive for the anti-SARS-CoV-2 antibody (Table S1). Patient 1 was also positive for the anti-Mycoplasma pneumoniae antibody on particle agglutination test. Patient 2 was hospitalized for SARS-CoV-2 infection at our hospital one month before KD diagnosis; KD onset occurred 56 days after COVID-19 onset.
Although the anti-SARS-CoV-2 IgG level in all the IVIG preparations was below the cutoff, it was higher than that in the patients' serum.
Our study revealed that most patients with KD at our hospital had clinical characteristics more compatible with classical KD than with PIMS-TS. Their demographic, clinical, and laboratory features differed from those of patients with PIMS-TS, who are typically older and have a higher incidence of hemodynamic instability and gastrointestinal symptoms and a lower platelet count. 1 No dramatic increase in KD incidence or changes in its clinical features were observed during the local COVID-19 epidemic, unlike in other countries where PIMS-TS is endemic. Only one (2%) patient was positive for anti-SARS-CoV-2 IgG, in contrast to around 80% of patients with PIMS-TS in Europe. 1
There was an apparent drop in KD cases from period 1 to period 2. However, the number of patients with KD treated at our hospital in the previous year during the period corresponding to periods 1 and 2 was 32 and 19, respectively, showing a similar trend with the present subjects.
Considering that the incidence of KD is lower in spring than in winter, 3 this drop was assumed merely to reflect the normal seasonality of KD rather than any effect of COVID-19.
Several reasons may explain the lack of PIMS-TS reports in Asia. First, patients with COVID-19 have thus far been much fewer in Asia than in Europe, leading to fewer children having the disease. Second, PIMS-TS apparently shows a predilection for individuals of black or Hispanic descent, 1,2 suggesting that genetic factors may contribute to its uneven distribution. The same trend was observed in Kawasaki shock syndrome, a severe form of KD characterized by clinical presentations very similar to those of PIMS-TS. 4 These ethnic groups may have some genetic factors that make them more susceptible to hyperinflammatory states. Our study incidentally revealed a higher anti-SARS-CoV-2 IgG level in IVIG preparations than in the patients' serum, indicating potential cross-reactivity between SARS-CoV-2 and common human coronaviruses, as previously demonstrated in a study analyzing human sera collected prior to the COVID-19 pandemic. 5 According to recent reports from Europe and the United States, more than half of patients with PIMS-TS are known to present without Kawasaki-like symptoms. 1,2 Because the current study focused only on patients with KD, the exact epidemiology of PIMS-TS was not analyzed. Furthermore, the present report was based on a small, retrospective study at a single institution; nationwide studies are needed to clarify the actual state of PIMS-TS in Japan.
In summary, our study revealed that most patients who received a diagnosis of KD during the COVID-19 epidemic in our catchment area had classical KD rather than PIMS-TS.

[Table 1 fragment: Neurological symptoms, 0 (0) in all groups; Peripheral hypoperfusion, 0 (0) in all groups.]
|
A second look at selected compounds is giving new life to several abandoned therapies and new applications to existing drugs [1-3]. One such example is chloroquine, which, after being dismissed from antimalarial treatment, is finding new applications in the clinical management of autoimmune diseases, tumours and nonmalarial infections [4, 5]. The use of chloroquine in the clinical management of a viral infection was first considered in the 1990s, on the basis of its effects on HIV-1 [6, 7]. The drug is now being tested as an investigational antiretroviral [8].
Some of us previously analysed the reported effects of chloroquine on replication of several viruses and concluded that the drug should be studied as a broad spectrum antiviral agent against emerging viral infections, being relatively well tolerated, cheap, and immediately available worldwide [9] . As a weak base capable of accumulating within cellular organelles, chloroquine appears to be capable of interfering with pH-dependent steps in the replication of several viruses. Other mechanisms of viral inhibition by chloroquine, such as inhibition of polynucleotidyl transferases have, however, been considered [7] . In 2003-2005, chloroquine was studied as a promising in vitro anti-SARS agent [9] [10] [11] and recently entered clinical trials against chikungunya fever [12] .
The broad-spectrum antiviral effects of chloroquine deserve particular attention at a time when there are several cases of avian influenza A virus transmission to humans from poultry, and when the availability of antiviral drugs is fundamental while effective vaccines are being prepared and evaluated. Chloroquine inhibition of both type A and type B influenza viruses was first described in the 1980s [13, 14], but the concentrations employed in those studies were too high to allow a theoretical transposition to in-vivo settings. Anecdotal reports of clinical benefit from a related compound, quinine, date back to the Spanish influenza pandemic of 1918/19. However, it was not until last year that the anti-influenza virus effects of chloroquine at clinically achievable concentrations were studied, in view of a possible application of this drug in the clinical management of influenza [4, 15]. Open questions remain: the mechanisms of orthomyxovirus inhibition by chloroquine at the clinically achievable concentrations adopted in the most recent studies [4, 15] are uncertain, as are the effects of chloroquine on field isolates, including avian strains potentially transmissible to humans.
We here report the results of an initial evaluation of susceptibility to chloroquine of human and avian influenza A viruses. Susceptibility to chloroquine appears to be dependent on the pH requirements of the viruses and the electrostatic potential of haemagglutinin subunit 2 (HA2), which is involved in virus/cell fusion. Accordingly, the antiviral effects are exerted at an early step of virus replication.
We first tested the effects of chloroquine on the low-pathogenic (LP) A/Ck/It/9097/97 (H5N9) virus, isolated from poultry in Italy. We found that chloroquine dose-dependently inhibited the viral cytopathic effect, with a 50% effective concentration (EC50) of 14.38 μM in cells infected with the H5N9 virus at approximately 10^4 50% tissue culture infectious doses (TCID50)/ml (Fig. 1a). Although this value is rather high, some of the inhibitory concentrations matched the blood concentrations reported in individuals under acute antimalarial treatment (1-15 μM). The inhibitory effects were confirmed using quantitative reverse transcriptase real-time PCR (qRRT-PCR) (Fig. 1b). Ooi et al. (2006) [15] recently reported that chloroquine inhibited human H3N2 and H1N1 viruses with EC50 values in the range of 0.84-3.60 μM. To investigate whether the discrepancies with the inhibitory values reported above were due to the type of virus or to the different conditions and methods adopted, we tested the effects of chloroquine on the replication of recent human H3N2 and H1N1 isolates under conditions similar to those adopted for the H5N9 avian influenza virus. Using the test of inhibition of viral cytopathogenicity, we found that chloroquine inhibited the H3N2 virus (10^4 TCID50/ml) with an EC50 of 1.53 μM (Fig. 1c). Inhibition was confirmed using qRRT-PCR, both under similar conditions and at lower MOIs (Fig. 1d), thus confirming the results of Ooi et al. (the assay adopted by these authors employs a lower MOI than routinely used by our group). Results obtained with H1N1 viruses (10^4 TCID50/ml) showed a similar drug susceptibility for the human strain (EC50 = 1.26 μM), in full agreement with Ooi et al. [15], but no response to clinically achievable drug concentrations in an avian strain (IC50 > 20 μM; data not shown). These data suggest a more pronounced inhibitory effect of chloroquine on human H3N2 and H1N1 virus replication than on avian H5N9 virus replication.
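EC50 values such as those above are typically obtained by fitting a sigmoidal dose-response curve to inhibition data. The sketch below shows the general procedure with a Hill-type model and made-up data points chosen to land near an EC50 of roughly 14 μM; it is not the authors' assay data or fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, n):
    """Hill-type dose-response curve: inhibition rises from bottom to top."""
    return bottom + (top - bottom) * x**n / (ec50**n + x**n)

# Illustrative data only: chloroquine concentration (uM) vs. % inhibition
conc = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0])
inhib = np.array([5.0, 12.0, 24.0, 42.0, 62.0, 78.0])

params, _ = curve_fit(hill, conc, inhib, p0=[0.0, 100.0, 10.0, 1.0])
print(f"estimated EC50 = {params[2]:.1f} uM")
```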
Since 1) chloroquine is thought to interfere with pH-dependent steps of the life cycles of several viruses [9], and 2) some of us reported different pH requirements in influenza A viruses infecting different avian species [15], we investigated whether the response of influenza viruses to chloroquine might depend on the different pH requirements of the human and avian viruses.
Thus, we analysed the response of the human H3N2 virus (a good chloroquine responder) and the avian H5N9 virus (a poor chloroquine responder) to ammonium chloride (NH4Cl; 40 mM), a lysosomotropic agent known to increase the pH of intracellular vesicles. The chloroquine-sensitive H3N2 virus responded well to NH4Cl inhibition of viral cytopathogenicity (100% inhibition under the conditions described above). Conversely, the lower chloroquine sensitivity of the H5N9 virus was associated with a lack of response to NH4Cl (data not shown). This observation raised the hypothesis that cellular pH might be a critical factor for chloroquine inhibition of influenza virus.
To explore this possibility, the action of chloroquine was tested on two avian H7N3 viruses whose haemagglutinins (HAs) differ at two amino acid positions (residue 261 in the HA1 subunit and residue 161 in HA2, the latter being the HA subunit mediating the fusion process) and which display different pH requirements [16]. The two viral strains showed a marked discrepancy in their response to chloroquine. In particular, A/Mallard/It/43/01 (H7N3) virus, which had been shown to be relatively more independent of pH increase than A/Ty/It/220158/02 virus [16], was insensitive to clinically achievable chloroquine concentrations (EC50 > 20 μM). In contrast, chloroquine exerted some inhibitory effect on A/Ty/It/220158/02 replication (EC50 = 14.39 μM), although the response of this virus to chloroquine was lower than that of the H3N2 virus.
The different pH requirements of the two viruses were confirmed by their different responses to NH4Cl (0% inhibition for A/Mallard/It/43/01; 30% inhibition for A/Ty/It/220158/02). This result raises the hypothesis that chloroquine inhibits pH-dependent events involving HA.
To further explore this possibility, the isoelectric point was calculated for the HAs of all viruses used in the present study, and the electrostatic potential was mapped onto the protein surfaces of 3D models obtained by homology modelling. The EC50 of chloroquine correlated with the isoelectric point of HA2 (Fig. 2). Instead, no correlation was found with the isoelectric point of HA1 (P > 0.05; data not shown). Although the viruses studied here belonged to different subtypes, chloroquine-resistant viruses, independently of the subtype, showed a more markedly negative surface potential of HA2 than chloroquine-sensitive viruses (Fig. 2). Viruses with intermediate drug sensitivity showed intermediate characteristics (Fig. 2). We conclude that structural determinants in HA2 are associated with the response of influenza A viruses to chloroquine.

[Figure 1 caption] Inhibition of H5 and H3 influenza A virus replication by CQ in MDCK cells. Cells were incubated with chloroquine (CQ) after virus inoculation or mock-infection and tested for cell viability and viral RNA copies at 24 h post-infection. A) Viability of cells infected with A/Chicken/Italy/9097/97 (H5N9) and treated with increasing concentrations of CQ, as detected by a colorimetric test. Assays were performed as described in the text. The dotted line indicates inhibition of uninfected cell viability; the solid line indicates inhibition of infected cell viability. Results are presented as the curves that best fit the data points. B) Results of one representative experiment showing inhibition by CQ of A/Chicken/Italy/9097/97 viral RNA production. Virus-infected MDCK cells were incubated for one day in the presence of 0, 5, 10, 20 or 25 μM chloroquine. Cell supernatants were used for viral RNA extraction and subjected to a quantitative real-time RT-PCR (qRRT-PCR) assay. Oseltamivir (OS; 20 nM) was used as a positive control. C and D) As in A and B), respectively, using A/Panama/2007/99-like (H3N2) virus. In D), results obtained with inocula containing 10^4 and 10^3 TCID50/ml are both reported. Results in B) and D) are displayed for purely representative purposes, to show that there is inhibition of virus production; they cannot be compared with each other or with those in A) and C), owing to the high intra- and inter-assay variability of the qRRT-PCR assay (see Ref. [29]).
If the hypothesis of a pH-dependent inhibitory action of chloroquine was correct, the timing of drug inhibition should match that of virus/cell fusion, an early step of virus replication occurring in endosomes and requiring a low pH (approx. pH 5-5.5) in several, but not all, influenza A viruses, as shown by previous studies [17]. As the assays for detection of antiviral effects adopted in the first part of this study were designed to allow multiple cycles of viral replication, we designed time-of-addition experiments using the chloroquine-sensitive human H3N2 virus. Chloroquine was added during virus adsorption onto cells (i.e. time 0; T0) and/or at 1, 2, 3 and 4 h post-infection (T1-T4). Using qRRT-PCR, we found that the inhibitory effect of chloroquine was highest when the drug was added at T0 (89.36% inhibition of viral replication), much lower at T1 (15.53% inhibition), and completely lost by T2.

[Figure 2 caption] Correlation between electric characteristics of haemagglutinin subunit 2 (HA2) and response to chloroquine of influenza A viruses. A) Correlation between the EC50 of chloroquine (CQ) on viral cytopathogenicity (presented as log values, x axis) and the isoelectric point of HA2 (the pH value at which the protein is neutral; y axis). The line best fitting the data points is shown. Isoelectric points were calculated from the protein sequence using the web interface in Ref. [35]. B-G) Theoretical three-dimensional models for the HA2 subunits of the viruses adopted in the present study, shown in ranked order of sensitivity to chloroquine (from resistant at clinically achievable concentrations to fully sensitive).
If this timing is correct, chloroquine should inhibit influenza A replication by a novel mechanism and therefore exert additive effects in combination with oseltamivir, which inhibits neuraminidase activity at the late stages of the viral replication cycle. To test this hypothesis, human H3N2 virus-infected cells were treated with different chloroquine concentrations in the presence or absence of oseltamivir (10 nM). The virus-infected cells were also incubated with oseltamivir alone (EC50 = 20 nM). Isobologram analysis showed that the two drugs exerted an additive effect (sum of FICs = 1) (data not shown). This result provides further evidence that chloroquine inhibits viral replication by a mechanism different from that of one major anti-influenza drug.
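As a sketch of the isobologram bookkeeping, the fractional inhibitory concentration (FIC) logic described in the methods below can be written out as follows; the EC50 values are hypothetical, and the synergy/antagonism labels outside the 0.8-1.2 additive window are common conventions assumed here, not thresholds stated in the paper.

    def fic_sum(ec50_a_combo, ec50_a_alone, ec50_b_combo, ec50_b_alone):
        # FIC of each drug: EC50 in combination divided by EC50 alone.
        return ec50_a_combo / ec50_a_alone + ec50_b_combo / ec50_b_alone

    def classify(fics):
        # 0.8-1.2 is the additive window used in the paper (Ref. [8]);
        # labels outside that window are illustrative conventions.
        if fics < 0.8:
            return "synergistic"
        if fics <= 1.2:
            return "additive"
        return "antagonistic"

    # Hypothetical values: chloroquine (uM) and oseltamivir (nM), alone and combined.
    s = fic_sum(ec50_a_combo=0.8, ec50_a_alone=1.5,
                ec50_b_combo=9.0, ec50_b_alone=20.0)
    print(f"sum of FICs = {s:.2f} -> {classify(s)}")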
The results so far obtained suggest that chloroquine inhibits the replication of those influenza A viruses requiring low pH for proper fusion activation, and that the antiviral effects occur at an early stage of viral replication. Supporting evidence comes from: 1) the more sustained inhibitory effect of chloroquine on those viruses whose haemagglutinins (HAs) were found to require low pH for their fusion activity; 2) common HA2 characteristics, such as surface potential and isoelectric point, in chloroquine-sensitive viruses; and 3) the time-of-addition experiments with the H3N2 virus.
Chloroquine has been found to inhibit a number of cellular processes, some of which do not depend on low pH but might nevertheless interfere with viral replication. For example, the drug was found to inhibit viral nucleotidyl transferases such as HIV-1 integrase [7]. If chloroquine inhibited the influenza A RNA-dependent RNA polymerase, however, the timing of viral inhibition would not be consistent with that observed in the present study, because RNA replication occurs in the nucleus at later stages [18].
Based on bioinformatic studies, it was recently hypothesized that chloroquine might inhibit UDP-N-acetylglucosamine transferase [1], a limiting enzyme in sialic acid synthesis. This specific issue has not been addressed here. Nonetheless, if the antiviral effect of chloroquine reported here were due to inhibition of UDP-N-acetylglucosamine transferase, the drug should likely have antagonized, or rendered redundant, the antiviral effect of the neuraminidase inhibitor oseltamivir (which acts on the detachment of sialic acid-bound virions from parent cells), since reduced sialic acid synthesis would leave fewer substrates for neuraminidase. Instead, chloroquine was found in the present study to exert antiviral effects that were additive to those of oseltamivir.
Chloroquine is a weak base known to accumulate in acidic vesicles, raising their pH and thereby causing dysfunction of several proteins [9], and it has been shown to inhibit different viruses requiring a pH-dependent step for entry [19-22]. The results of the time-of-addition experiments, performed in the present study using a recent epidemic isolate of the human H3N2 virus, are consistent with chloroquine inhibition of pH-dependent steps occurring at an early phase of influenza A virus replication.
Most influenza viruses enter target cells by fusion of the viral and cell membranes at endosomal pH (approx. pH 5-5.5), although some virus variants can also replicate well at higher pH values [16, 17, 23]. In a previous paper, the differential sensitivity of the growth of two naturally occurring variants of an H7N3 virus, isolated from mallards and turkeys, to increased pH values was shown to correlate with different fusion properties [16]. Since chloroquine appears to mimic the effects observed on these two viruses when pH is increased, our data support the hypothesis that the step of virus replication inhibited by clinically relevant chloroquine concentrations is the low-pH-dependent, haemagglutinin-mediated virus/cell fusion, in agreement with the evidence obtained by other authors at much higher concentrations than those used in this study [13, 14]. The correspondence between antiviral effects and the isoelectric point of HA2 (the HA subunit mediating the fusion process) is also consistent with this mechanism. Acidic pH in the endosomal compartment also activates the influenza virus ion channel M2, which promotes the uncoating of influenza virus in endosomes. However, M2 involvement as a possible target of the antiviral effect of chloroquine is unlikely, since no amino acid differences were observed in the M2 trans-membrane region (the ion-channel domain) between the two avian H7N3 viruses with different chloroquine sensitivities, as already reported [16].
Although a comprehensive study on the variation of fusion pH requirements of influenza A viruses of all HA subtypes and isolates from different hosts is not available, several authors have documented that the threshold pH at which the HA conformational change and virus-cell fusion occur is strain-specific [24-26]. Interestingly, the viruses showing the highest chloroquine sensitivity also displayed the highest HA2 isoelectric points. Thus, a relationship between isoelectric point and response to pH is apparent. However, a broad study relating the surface electrostatic potential to inactivation by pH would be required to analyse the molecular details of the HA/pH interplay. Analyses of virus production before and after exposure to chloroquine, and of possible changes in HA2 surface potential in viruses rendered resistant to chloroquine after long exposure to the drug, will also be necessary.
Although association between variables cannot be considered equivalent to causation, the results of the present study strongly suggest that pH critically determines the antiviral activity of chloroquine by regulating virus/host cell interactions. The potential use of this compound as an anti-influenza drug should take into consideration the possibility that, even within the same subtype, different strains may present significantly divergent sensitivities to chloroquine as a consequence of their different pH requirements. Moreover, sensitivity to chloroquine may vary in different cell populations susceptible to influenza A virus infection, depending on their different capabilities of endosome acidification. Mutations affecting the electrostatic potential of the HA2 protein subunit of various isolates of the same virus could also be relevant. All these factors should be carefully evaluated when hypothesising a potential clinical utilisation of chloroquine against influenza A viruses.
Chloroquine phosphate (7-chloro-4-[[4-(diethylamino)-1-methylbutyl]amino]quinoline phosphate; Sigma) was the test compound; oseltamivir (a kind gift from Roche) was used as a positive control.
After 24 hours of incubation of the virus-infected cells with different concentrations of the test compounds, under the appropriate conditions, pooled aliquots of the supernatants containing free viruses were subjected to RNA extraction and qRRT-PCR.
A one-step qRRT-PCR assay was employed, which makes use of minor groove binder (MGB) probe technology, as previously described [29]. Briefly, viral RNA was extracted from infected cell supernatants (QIAamp Viral RNA Mini kit; Qiagen GmbH, Hilden, Germany) and amplified by RRT-PCR using primers and a probe targeting a highly conserved region of the matrix gene of influenza type A viruses. The influenza matrix RNA was also transcribed in vitro from the corresponding DNA template, cloned into a plasmid vector as previously described [29], and used as standard RNA to generate standard curves for quantification of the vRNA in cell supernatants.
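The standard-curve step can be sketched as follows: a dilution series of the in vitro-transcribed matrix RNA of known copy number yields a linear relationship between Ct and log10(copies), which is then inverted to quantify vRNA in unknown supernatants. The Ct values below are invented for illustration, not data from Ref. [29].

    import numpy as np

    # Standard dilution series: copies per reaction and measured Ct values.
    std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
    std_ct = np.array([14.1, 17.5, 21.0, 24.4, 27.9, 31.3])

    # Linear fit: Ct = slope * log10(copies) + intercept.
    slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0  # amplification efficiency

    def copies_from_ct(ct):
        # Invert the standard curve to estimate copies in an unknown sample.
        return 10 ** ((ct - intercept) / slope)

    print(f"slope = {slope:.2f}, PCR efficiency = {efficiency:.1%}")
    print(f"sample at Ct 22.7 ~ {copies_from_ct(22.7):.2e} copies/reaction")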
100 μl of MDCK cells at 1.5 × 10^5 cells/ml in growth medium was seeded into each well of a 96-well microtiter plate. When the cell monolayer was confluent, the culture medium was removed and the cells were washed twice with serum-free MEM. Then, 100 μl of the type A influenza virus under study, containing 10^3-10^4 TCID50, was inoculated into the wells, and the plates were incubated for 1 hour at 37°C in humidified air with 5% CO2. The viral suspension was removed and the cells were washed twice; fresh medium containing TPCK-trypsin and chloroquine at different concentrations, or NH4Cl (40 mM), was then added to culture wells in triplicate. Antiviral activity and cytotoxicity measurements were based on the viability of cells that had been infected or mock-infected with influenza viruses in the presence of various concentrations of the test compounds. One to three days after infection, depending on the kinetics of cytopathogenicity, the number of viable cells was quantified by a tetrazolium-salt-based colorimetric method (CellTiter 96 AQueous One Solution kit; Promega, The Netherlands).
To measure the anti-influenza effects of chloroquine/oseltamivir drug combinations, cell pellets were resuspended in media containing increasing concentrations of the antimalarial in the presence or absence of oseltamivir. A fractional inhibitory concentration (FIC) was then calculated as the ratio: 50% effective concentration (EC50) of drug A in combination with drug B / EC50 of drug A alone. The effect was considered additive when the sum of FICs was between 0.8 and 1.2, as previously described [8].

Time-of-addition assay
Monolayers of MDCK cells in 96-well plates were infected with 100 μl of medium containing approximately 10^4 TCID50 of the H3N2 subtype. After 1 hour of adsorption, cell monolayers were washed twice with serum-free MEM and incubated in fresh medium containing TPCK-trypsin and chloroquine at a concentration of 10 μM. Chloroquine was added at the time of infection or at four different time points thereafter. Eight hours post-infection, a time point at which all progeny virus in the supernatants derives from the first replication cycle, cell supernatants were collected, viral RNA was extracted, and the antiviral activity was determined using the qRRT-PCR assay described above.
Haemagglutinin genes of the H3N2 and H1N1 viruses were sequenced using gene-specific primers, as previously described [30]. Sequence data so far unpublished will be deposited in GenBank by the time of publication of the present article.
Three-dimensional models of the HAs of the viruses used in the present study were obtained by homology modelling on the SWISS-MODEL web server [31, 32], using as templates the structures of matched subtype representatives deposited in the Protein Data Bank (PDB) [33]. Hydrogens were added using VEGA-ZZ (University of Milan, Italy) [34], and the structures were then visualised using the Swiss-PdbViewer (SPDBV) program (Swiss Institute of Bioinformatics) [31].
The Coulomb potential was mapped onto the protein surface with SPDBV, using the default relative dielectric constant (solvent = water) of 80. Further information on the algorithm adopted by the program is available in the detailed online description of the program [31].
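The isoelectric points used in Fig. 2A were obtained from the protein sequence via a web interface [35]; an equivalent estimate can be computed locally, for example with Biopython's ProtParam module, as in this minimal sketch (the sequence shown is a short placeholder, not an HA2 sequence from the study).

    from Bio.SeqUtils.ProtParam import ProteinAnalysis

    # Placeholder polypeptide sequence standing in for an HA2 subunit.
    ha2_placeholder = "GLFGAIAGFIENGWEGMVDGWYGFRHQNSEGTGQAADLKSTQAAIDQINGKLNRVIEKTN"
    pi = ProteinAnalysis(ha2_placeholder).isoelectric_point()
    print(f"estimated isoelectric point: {pi:.2f}")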
|
The response to the global SARS-CoV-2 pandemic culminated in mandatory isolation throughout the world, with nation-wide confinement orders issued to decrease viral spread.
These drastic measures were successful in "flattening the curve," slowing the growth of COVID-19 infections and deaths. To date, the effects of the COVID-19 pandemic on neurotrauma have not been reported.
We retrospectively analyzed hospital admissions to the Ryder Trauma Center at Jackson Memorial Hospital during the months of March and April from 2016 to 2020. Specifically, we identified all patients with cranial neurotrauma, consisting of traumatic brain injury (TBI) and/or skull fractures, and spinal neurotrauma, consisting of vertebral fractures and/or spinal cord injury (SCI). We then performed chart review to determine the mechanism of injury and whether emergent surgical intervention was required.
Compared to previous years, we saw a significant decline in the number of neurotraumas during the pandemic, with a 62% decline after the lockdown began. The number of emergent neurotrauma surgical cases also decreased significantly, by 84%, in the month of April. Interestingly, while the number of vehicular traumas decreased by 77%, there was a significant 100% increase in the number of gunshot wounds.
As of June 1, 2020, the global incidence of COVID-19 was 6.05 million confirmed cases, with 371,000 related deaths. The U.S. accounted for a major proportion of infections, with 1.7 million confirmed cases and 102,000 related deaths. 1 Specific to our institution, Miami-Dade County had 18,139 confirmed cases with 702 associated deaths. The first case of COVID-19 in Miami-Dade County was confirmed on March 12, 2020, nearly 50 days after the initial case in the U.S. and 8 days after the initial case in Florida. 2 A subsequent state-wide closing of restaurants and bars on March 17, 2020 was implemented to decrease viral spread. However, after a significant rise in the infection rate over the next few weeks, the Governor of Florida issued an executive "stay-at-home" order on April 1, 2020. 3 While the rate of viral spread improved, we also saw a decrease in both the number of accidents causing traumatic injuries and the number of emergent surgical procedures, secondary to a decline in both foot and automobile traffic. 4
After obtaining approval for this retrospective study from the University of Miami Institutional Review Board (IRB), we queried the registry at Ryder Trauma Center to obtain a list of patients from 2016-2020 who sustained neuro-trauma during the timeframe of March 1 to April 30.
Neurotrauma was defined as TBI, skull fracture, SCI, or vertebral fracture.
Chart review was then performed to obtain variables such as age, sex, mechanism of injury, type of injury, and need for emergent surgery. Mechanisms of injury included assaults, bicycle accidents, ground-level falls (sitting, standing), falls from height (ladder, roof, multiple stories), motorcycle collisions (MCC), motor-vehicle collisions (MVC), pedestrians hit by cars (PHBC), gunshot wounds (GSW), and other mechanisms.
Overall, we found a significant difference in the average number of monthly neurotrauma consults from 2016-2019, with 83.5 ± 4.7 in March and 68.0 ± 8.8 in April (p = 0.048, Student's t-test). However, in March 2020 we saw a 20% decrease in total neurotrauma consults, which was significantly lower (p = 0.036, Poisson analysis) than in previous years (Figure 1). This declining trend continued in April 2020, when the number of neurotrauma consults decreased significantly, by 62% (p = 0.0001, Poisson analysis), after the state-wide "stay-at-home" order was issued on April 1, 2020. While the number of injuries from each mechanism decreased relative to prior-year averages, the relative proportion of each mechanism did not. As expected, the proportions of motorcycle collisions (MCC), motor-vehicle collisions (MVC), and bicycle accidents decreased by 4%, 10% and 3%, respectively. Additionally, the proportion of ground-level falls resulting in neurotrauma decreased by 6%, while the proportion of falls from height increased by 6%. There was also a non-significant 2% increase in the proportion of assaults.
Surprisingly, there was a 6% increase in the proportion of pedestrians hit by cars (PHBC) and a significant 12% increase (p = 0.034, chi-squared proportion analysis) in the proportion of gunshot wounds (GSW). There were no traumatic injuries caused by "other" mechanisms during the pandemic.
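A minimal sketch of the two statistical comparisons used above is given below, with invented counts standing in for the registry's actual numbers: an exact Poisson tail probability for the drop in monthly consults, and a chi-squared test on a 2x2 table for the change in the proportion of one mechanism (e.g., GSW).

    from scipy.stats import poisson, chi2_contingency

    # Poisson: prior-year April mean (from the text) vs. a hypothetical April 2020 count.
    expected_mean = 68.0
    observed = 26  # invented count consistent with a ~62% decline
    p_poisson = poisson.cdf(observed, expected_mean)  # one-sided lower tail
    print(f"Poisson p-value: {p_poisson:.4g}")

    # Chi-squared on a 2x2 table: rows = period, columns = (GSW, all other mechanisms).
    table = [[10, 290],  # hypothetical 2016-2019 counts
             [8, 18]]    # hypothetical 2020 lockdown-period counts
    chi2, p_chi2, dof, _ = chi2_contingency(table)
    print(f"chi-squared p-value: {p_chi2:.4g}")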
In the U.S., traumatic unintentional injuries are the leading cause of death in people less than 45 years old, and the third leading cause of death among all age groups combined. 5 An estimated 1.7 million people sustain traumatic brain injury (TBI) annually, with approximately 52,000 deaths. 6 Additionally, while not typically life-threatening, an estimated 18,000 people sustain spinal cord injury (SCI) annually. 7 Importantly, the effects of the SARS-CoV-2 pandemic on the incidence of neurotrauma had yet to be reported. While several hospitals found a decreasing trend in general trauma admissions from February to April 2020, none of these studies thoroughly evaluated the effects on emergent operative cases or changes in mechanisms of injury. 8-18 Here we found that the average number of neurotrauma consults differed significantly between March and April, likely because South Florida is a Spring Break destination with an influx of vacationers during that timeframe. During the pandemic, however, travel restrictions combined with less foot and vehicle traffic led to a decrease in all mechanisms of injury except GSW. Upon further investigation, we found that the relative proportions of mechanisms of injury also changed after the lockdown in April 2020. With fewer citizens commuting on the streets, the proportion of vehicular trauma decreased as expected. Decreases in these types of traumas have been reported across the country, although not in correlation with specific events such as the initiation of lockdown protocols. 8,9,12,13 The proportion of ground-level falls also decreased; however, this may have been because patients were unwilling to risk going to the emergency room after minor accidents for fear of contracting the virus. 18 Importantly, some businesses were deemed "essential" and allowed to continue operating, which some construction companies took advantage of. 1 This may explain the increase in the proportion of falls from height, in addition to people doing home repairs while "stay-at-home" orders were in place. The increases in the proportions of assaults and GSWs during the pandemic may be secondary to the psychosocial effects of mandatory isolation. 19,20 Family and friends were forced into close proximity to one another, which had the potential to ignite conflicts and violence leading to assault. 21 Finally, with prolonged confinement comes an increased risk of suicide, which may explain the increase in presumed self-inflicted PHBC and GSW. 22
While our results are compelling, several limitations could affect this study. For instance, ambulances may have avoided our hospital, which had a high COVID-19 census, and primary care physicians may have treated minor traumas rather than referring patients to the emergency room. These confounding variables are difficult to address during the pandemic and must be taken into account when referencing this observational study.
|
As the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic continues to expand, healthcare resources globally have been spread thin. The disease is now rapidly spreading across South America, with deadly consequences in areas with already weakened public health systems [1]. The Amazon region is particularly susceptible to widespread devastation from Coronavirus disease 2019 (COVID-19) because of its immunologically fragile native Amerindian inhabitants, epidemiologic vulnerabilities due to remoteness, lack of infrastructure, habitat destruction from deforestation and mining, and the ever-present risk of introduction and emergence of new pathogens [2, 3]. For the Indigenous peoples of the Amazon, whose numbers have been declining, any death represents a threat to the survival of the tribe. The introduction of novel infectious diseases may be the most devastating consequence of European colonization of the Americas, particularly in native Indigenous communities [2], commonly resulting in severe clinical outcomes, increased mortality, and social unrest. The small Indigenous communities of Amazonia (defined as the Amazon River basin and Guiana Shield) have been especially vulnerable to these "virgin soil epidemics," not only at the time of initial European colonization but also during subsequent encounters with settlers and incursions by global industrial interests [2]. The particular vulnerability of New World Indigenous populations, especially Amazonians, is thought to be due to high genetic homozygosity, proposed to be the consequence of a serial founder effect compounded by successive generations of inbreeding [2]. This lack of genetic diversity has been shown to promote devastating epidemics among genetically similar hosts for pathogens including malaria, tuberculosis, HIV, and leprosy [2]. In addition to their biological vulnerability, Indigenous groups are further affected by extensive, deeply rooted socioeconomic disadvantages, which continue to accelerate in a uniform pattern across countries, translating into the worst health and developmental indicators within nations [4]. In this setting, the reemergence of measles in Venezuela and Brazil has had dire consequences, with large numbers of deaths among the Yanomami and other Indigenous populations [5, 6], and now the SARS-CoV-2 pandemic has the potential to become an extinction event for many of the remaining tribes.
The number of SARS-CoV-2 infections has increased rapidly throughout South America, following the diagnosis of the first COVID-19 case on February 25, 2020, in São Paulo, Brazil, and the first reported COVID-19 death, on March 7, 2020, in Argentina. Despite the implementation of containment measures, including quarantine and lockdown by several countries, by March 20, 2020, COVID-19 cases had been reported by all South American countries, and according to the World Health Organization, by July 23, 2020, there had been more than 3,300,000 SARS-CoV-2 infections and 121,000 deaths [7]. The 9 countries (Bolivia, Brazil, Colombia, Ecuador, Guyana, French Guiana, Peru, Suriname, and Venezuela) of the Pan-Amazonian region accounted for most of these cases and deaths, led by Brazil (2,227,514 cases and 82,771 deaths) [8], Ecuador (78,148 cases and 5,439 deaths) [9], and Colombia (226,373 cases and more than 7,680 deaths) [10]. Although there are currently insufficient data to draw confident conclusions regarding the influence of climatic factors such as temperature and humidity on transmission of SARS-CoV-2 [1], we discuss several characteristics of each of the diverse environments within the Amazon region that may enhance transmission.
Within Brazil, the state of Amazonas has felt the greatest impact thus far. Amazonas is the Brazilian state with the largest number of native Indigenous people, from the more populous tribes of the Tikuna and Yanomami as well as from smaller tribes, some of which, like the Akuntsu, have only a few remaining members. In Manaus, the capital city of Amazonas, the convergence of a particularly vulnerable population, a new pathogen, and limited healthcare facilities and resources created a "perfect storm," evidenced by the dramatic rise in confirmed SARS-CoV-2 infections, which jumped from 67 to 32,496 cases (an increase of more than 1,000%) between March 26 and July 23 (Fig 1A; Table 1) [11, 12]. Manaus has 293 hospital beds (private and public) and 8 ambulances to serve its approximately 2 million inhabitants. According to the State Health Secretariat, all intensive care unit (ICU) beds were occupied on May 1; there are no ICU beds in hospitals outside Manaus [13]. The lethality rate in Manaus is the highest in Brazil (6.07% by July 23; Fig 1B), necessitating the use of mass graves to accommodate an average of 180 burials daily [12]. Despite a current downward trend in hospitalizations, Manaus continues to have the most confirmed SARS-CoV-2 cases, followed by other Amazonian municipalities such as Coari, Manacapuru, and São Gabriel da Cachoeira [12]. The convergence of urban and suburban overcrowding, poor water and sanitation systems, and a delayed government response are factors contributing to the recent upsurge in cases. In addition, the close contact between families and communities in Amerindian settings, or during riverboat transportation, further fuels the epidemiological dynamics of COVID-19.
A similarly disastrous situation is occurring in the Amazonas department of Colombia. Leticia, the capital of the Amazonas department and a city of 80,000 inhabitants (the majority of whom are Indigenous peoples), reported its first COVID-19 case on April 17. By July 23, 2020, the number had jumped to 2,335 cases with 98 deaths, representing the highest number of COVID-19 cases per capita in Colombia (Fig 1A) [10]. The Colombian Amazon region, as in Brazil, is home to a large Indigenous population that is potentially highly vulnerable to rapid expansion of the virus. Among the 54 Indigenous ethnic groups, approximately 398,365 Indigenous families are at risk; most confirmed SARS-CoV-2 infections have been in the Zenú, Tikuna, Mokaná, Uitoto, Los Pastos, and Pijao ethnic groups, and the several deaths pose a risk for these shrinking communities [14]. Leticia and its populace are also at risk for directional spread from the border city of Tabatinga, Brazil, which has had about 1,629 COVID-19 cases and 78 deaths to date (Fig 1A and 1B; Table 1) [12]. Here again, resources to fight this pandemic are severely limited. There are only 2 hospitals in Leticia, totaling 70 beds; only 1 of the 2 has an ICU, with 23 beds, all currently occupied [15].

[Figure 1 caption, in part] (A) Incidence [29] in cases per 100,000 people, shown in a spectrum of white (zero incidence) to dark red (more than 200 cases per 100,000). (B) Lethality [29] rates per 1,000 infections, shown in a spectrum of white (zero deaths) to dark blue (104 to 125 deaths). The Indigenous territories are indicated in light green.
The 6 provinces (Sucumbíos, Orellana, Napo, Pastaza, Morona Santiago, and Zamora Chinchipe) in the Amazon region of Ecuador have the lowest population density in the country, holding 12% of the total population but only 10 ICUs and 19 critical care beds [16]. Nine Indigenous ethnic groups are registered in the region, including 7 cross-border groups and 2 uncontacted or voluntarily isolated ethnic groups. Colonists (mestizos) and the Kichwa ethnic group, who have migrated from the highlands since the 1950s, make up the largest part of the population, outnumbering the ethnic groups that originally inhabited the area. The first apparent case of COVID-19 in Ecuador was reported on February 29, in a traveler returning from Spain, and the first case in Ecuador's Amazon region on March 7, in a Dutch tourist [17]. By July 23, 2020, there were 78,148 cases registered in Ecuador, mostly in the Costa and the Sierra, although this likely reflects underreporting due to limited resources for diagnosis and testing (Fig 1A; Table 1) [9]. Official statistics do not differentiate cases between settlers and Indigenous ethnic groups; cases affecting Indigenous people are mostly identified on clinical findings, with few confirmed by rapid tests. In late April and early May, 2 deaths and 14 COVID-19 cases confirmed by rapid tests were reported in the Secoya (Siekopai) community in Sucumbíos, a cross-border (Peru) community with fewer than 1,000 inhabitants [18]. As of mid-July, 699 cases had been reported among the Kichwa in the province of Napo, the highest number thus far among Indigenous ethnic groups [19]. In addition, the Waorani community, one of the most homozygous and with only about 2,000 members, had reported 297 cases by mid-July, and 22 cases were reported among the Achuar in the Pastaza province [19]. Scarcity of resources severely hampers efforts to contain the epidemic and limit spread to other vulnerable ethnic groups in the region.
The COVID-19 catastrophe in the Peruvian Amazon has already begun in Iquitos, the capital of the Loreto region. Loreto is home to the greatest number of Amazonian Indigenous communities in Peru. The lack of basic health resources, such as supplemental oxygen, has been a major limitation in the treatment of SARS-CoV-2-infected patients, even among those who do not require intubation. As a consequence, Loreto has one of the highest estimated case fatality rates in Peru, 5.1% compared with 4.7% in the rest of the country (Fig 1B) [20]. Although nation-wide case reports have not included ethnicity, a few regions have begun to include this information in order to better understand the impact of the COVID-19 epidemic on Indigenous people. A report from the Ucayali Amazon region indicated that of a total of 3,525 SARS-CoV-2-positive cases, 41 were members of the Shipibo Indigenous people [21]. In the Alto Amazonas Province, a report from the Catholic church indicated that of a total of 272 confirmed cases, 3 patients were Kukama and 2 were Shawi people [22]. According to the last Peruvian national census, only 865 (32%) of a total of 2,703 native communities had a health post, and virtually none had facilities to admit patients [23]. Therefore, the urgent need to improve health resources in the midst of the COVID-19 pandemic represents one of the greatest challenges to protecting the survival of Indigenous people. Although some Indigenous people may be able to self-isolate in the forest, natural resource degradation affecting food and water quality in the Peruvian Amazon [24] and the increasing dependency of Indigenous households on the market for cash and food supplies have made mobility and self-isolation more difficult [25]. Protection of the natural environment and provision of food aid are of primary importance to enhance the viability of self-isolation as a strategy for Indigenous communities.
The onset of the COVID-19 epidemic in French Guiana, a French overseas department located between Suriname and the Brazilian state of Amapá, lagged its onset in mainland France by several weeks. The early implementation by the French government of a strict lockdown, simultaneously in mainland France and French Guiana, enabled containment of the epidemic at an early stage. Hence, 6,654 cases and 37 deaths had been registered as of mid-July (Fig 1A and 1B ; Table 1 ) [26] . Many of the earliest cases reported were in travelers from mainland France and their contacts, often in small intrafamilial clusters. One large community cluster reported at the beginning of the epidemic occurred in an Amerindian community located close to the capital city, in which 21 cases were detected in an extended family group of 60 people, living in a village of approximately 250 inhabitants [26] . The chains of transmission in that community were stopped by active early screening and rapid isolation of the village.
Despite the early lockdown and apparent containment, a recent surge in cases has been observed following the lifting of the lockdown on May 11. Notably, clusters have arisen in several Amerindian communities on the Oyapock river, in relation to cases imported from bordering Brazil [26]. This surge may have been favored by food insecurity, which forces communities to seek supplies in Brazil, and exacerbated by a sociocultural context that mixes precariousness, frequent contacts with the extended family and, sometimes, low cultural acceptance of lockdown measures. Furthermore, the remote Amerindian communities on the upper Oyapock river and the upper Maroni river bordering Suriname are at increased risk due to the movement of illegal gold miners, whose activity has intensified since the onset of the epidemic.
Even before the COVID-19 pandemic, Venezuela had suffered a massive resurgence of both vaccine-preventable infections, especially measles, and neglected tropical diseases, due to extreme socioeconomic instability and lapses in public health control measures. Now, strict censoring of epidemiological data has limited the availability of direct information about the course and impact of the COVID-19 epidemic in Venezuela. However, the highly mobile population and permeable borders between Venezuela and its neighbors, Colombia and Brazil, act as a disease corridor between countries and provide a window into the status of the outbreak in Venezuela [27]. Movement across the borders has increased dramatically in conjunction with the widely reported increase in illegal mining activities in the Venezuelan Amazon. COVID-19 cases have already been reported in the 3 states of the Guiana Shield region, first in Bolívar State, specifically in the municipality of Gran Sabana, and now in the Heres and Caroní municipalities. Currently, 36 cases have been reported in Amazonas State and 33 in Delta Amacuro State; none are recognized as Indigenous (Fig 1A; Table 1) [28]. Although the Venezuelan government usually does not report cases differentiated by Indigenous or non-Indigenous peoples or by Indigenous tribe, it is known that 146 of the diagnosed cases are Indigenous, occurring in 6 Yeral, 3 Kurripako, 127 Pemon, 4 Warao, and 6 without ethnic identification, all of whom are associated with Brazilian cases, mainly from the São Gabriel da Cachoeira municipality [29]. They were immediately put into quarantine and have been receiving medical assistance. Nevertheless, travel across the Venezuelan-Brazilian and Venezuelan-Colombian borders through illegal paths is an ongoing concern that has not been addressed by the government. Furthermore, some COVID-19 cases have been diagnosed in the Yanomami and Warao people who reside across the Brazilian-Venezuelan border; the Wayuu Indigenous people on the Colombian-Venezuelan border are similarly vulnerable [29].
Facing the worst humanitarian refugee crisis ever witnessed in the Western hemisphere and a total breakdown of its healthcare and public health systems, Venezuela will not only be unable to face the challenges of this ongoing pandemic but, also, could dangerously serve as a disease-amplifier, also putting at risk the already vulnerable healthcare systems of its neighbors.
Although many countries and regions around the world have implemented lockdown policies to halt viral transmission, such an approach may prove ineffective given the unique geographic idiosyncrasy of the Amazon. Most areas of the vast rainforest rely on river transportation for commerce and as the main form of regional transport. With myriad labyrinthine waterways, lockdown measures to contain population movement are nearly impossible.
Measures to address the imminent effects of the arrival of SARS-CoV-2 in the Amazon region, and its potentially devastating impact on the most vulnerable Amerindian settlements, must include: intensified epidemiologic surveillance and enhanced reporting to track the spread of the virus; mobilization of the necessary resources for healthcare delivery; establishment of rapid-response systems to ensure food security and resources in the hardest-hit areas; deployment of specialized field healthcare personnel and all resources necessary to provide on-site testing and critical care to affected patients; and mobilization of security forces to restrict illegal foreign incursions into Indigenous territories, particularly by illegal miners. Regional and global efforts will be required to ensure that these vulnerable populations receive timely access to new COVID-19 vaccines and therapies as they become available.
Although governments must develop nationwide strategies to mitigate the effects of the SARS-CoV-2 pandemic, the particular vulnerability of native Indigenous peoples, and the potential for loss of entire communities, highlights the need for special attention. Recently, the Office of the United Nations High Commissioner for Human Rights issued a statement declaring that "Indigenous people will face extreme risks." As many Amazonian countries rapidly become epicenters of COVID-19, governments should take timely and decisive actions to tackle this potentially overwhelming ethnic crisis.
Together, the measles epidemic and COVID-19 pandemic represent a "one-two punch" that might accelerate catastrophic declines to the public health of Indigenous populations in the region. Further spread of the latest COVID-19 epidemic wave could prove devastating for many Amerindian people living in the Amazon rainforest, ultimately pushing these communities towards extinction.
|
Entry of SARS coronavirus (SARS-CoV) into target cells is mediated by binding of the viral spike (S) protein to the receptor molecule, angiotensin-converting enzyme 2 (ACE2) (1). Studies on SARS using infectious SARS-CoV require great care because of the highly pathogenic nature of this virus, so an alternative methodology is needed. Recently, pseudotyped retrovirus particles bearing the SARS-CoV S protein have been generated by several laboratories (2-4). These pseudotyped viruses have been shown to have a cell tropism identical to that of authentic SARS-CoV, and their infectivity is dependent on ACE2, indicating that infection is mediated solely by the SARS-CoV S protein. Pseudotyped viruses have proved to be a safe viral entry model because of their inability to produce infectious progeny virus. A quantitative assay of pseudovirus infection could facilitate research on SARS-CoV entry, cell tropism, and neutralizing antibodies.
Another pseudotyping system, based on a vesicular stomatitis virus (VSV) particle, was previously reported to produce pseudotypes of the envelope glycoproteins of several RNA viruses (i.e., measles virus, hantavirus, Ebola virus, and hepatitis C virus) (5-8). This system (the VSVΔG*/GFP system) may be useful for research on envelope glycoproteins owing to its ability to grow to high titers in a variety of cell lines. The pseudotype virus titer obtained from the VSVΔG*/GFP system (>10^5 infectious units (IU)/ml) is generally higher than that of the pseudotyped retrovirus system (6). Furthermore, infection of target cells by pseudotyped VSV can be detected as GFP-positive cells within 16 h post-infection (hpi) because of the powerful GFP expression in the VSVΔG*/GFP system (6). In contrast, the pseudotyped retrovirus system requires 48 hpi to detect infection (9,10), which is similar to the time required for SARS-CoV to replicate to the level of producing plaques or cytopathic effects in infected cells. Thus, pseudotyping SARS-CoV S protein with the VSVΔG*/GFP system may have greater advantages than retrovirus pseudotypes for studying the function of the SARS-CoV S protein, as well as for developing a rapid system for detecting neutralizing antibodies specific for SARS-CoV.
Here we describe protocols for introducing the SARS-CoV S protein into VSV particles using the VSVΔG*/GFP system. Infection by the VSV pseudotype bearing the SARS-CoV S protein (VSV-SARS-St19/GFP) is easily detected in target cells as expression of the GFP protein. The following methods were originally designed to produce VSV-SARS-St19/GFP and to measure infection efficiency. In addition to the significant advantages of VSV-SARS-St19/GFP for safe and rapid analyses of infection, the VSVΔG*/SEAP system, in which the G gene is replaced with the secreted alkaline phosphatase (SEAP) gene, may be better suited to high-throughput quantitative analysis of S-mediated cell entry. The protocol for analyzing pseudotypes using the VSVΔG*/SEAP system is also described briefly.
A schematic description of the production of the VSV pseudotype is shown in Fig. 1. All viral components except the G protein are supplied by these viruses. (4) Virus assembly, budding, and pseudotyping: translated viral proteins are assembled, and viral particles bud from the plasma membrane. Since the S protein, provided by the expression plasmid, is expressed on the cell surface, the virus can incorporate it into the virus particle.
The infectivity of VSV-SARS-St19/GFP, harboring the VSVΔG*/GFP genome, can be determined as the number of GFP-positive cells.
1. Mix serially diluted VSV-SARS-St19/GFP with DMEM-5% FCS and inoculate the mixture onto Vero E6 cells seeded in 96-well culture plates.
2. Incubate the cells in a CO2 incubator for 7 h. Longer incubation (overnight to 24 h) may give clearer fluorescence intensities in GFP-positive cells.
3. Detect GFP-positive cells using fluorescence microscopy (Fig. 2).
4. Determine the IU of the pseudotype. The IU is defined as a virus titer endpoint determined by limiting dilution.
For the SEAP readout, the corresponding final steps are:
4. Incubate the cells in a CO2 incubator overnight.
5. Determine SEAP activity with a specific SEAP assay kit. Kits from several manufacturers are available; since the manufacturers' protocol details vary, only a general description follows, based on the Toyobo kit we have used.
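Since the IU is defined as a limiting-dilution endpoint, the interpolation can be done with, for example, the Reed-Muench method, sketched below with invented well counts; the chapter does not prescribe a particular calculation, so this is only one common approach.

    import math

    dilutions = [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]  # serial 10-fold dilutions
    positive = [8, 8, 6, 2, 0]                  # GFP-positive wells out of 8 per dilution
    total = 8

    # Reed-Muench: cumulative positives down the series, cumulative negatives up.
    cum_pos = [sum(positive[i:]) for i in range(len(positive))]
    cum_neg = [sum(total - p for p in positive[:i + 1]) for i in range(len(positive))]
    pct = [cp / (cp + cn) * 100 for cp, cn in zip(cum_pos, cum_neg)]

    # Interpolate the dilution at which 50% of wells are positive.
    for i in range(len(pct) - 1):
        if pct[i] >= 50 > pct[i + 1]:
            prop = (pct[i] - 50) / (pct[i] - pct[i + 1])
            log_endpoint = math.log10(dilutions[i]) - prop  # 10-fold steps
            print(f"50% endpoint ~ 10^{-log_endpoint:.1f} per inoculum volume")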
The technique uses 96-well plates. Twenty μl of supernatant from the cell cultures is added to each well with an equal volume of endogenous alkaline phosphatase inhibitor. Incubate the mixture at 37°C for 30 min. Next, add 160 μl of chemiluminescent substrate to the mixture. An incubation of 30 min at 37°C allows the chemiluminescence reaction to proceed. Use a luminescence microplate reader to detect the chemiluminescence signal.
1. To generate a high-titer VSV pseudotype, prepare a plasmid encoding a C-terminally truncated version of the S protein: truncation of the C-terminal 19 amino acids has been shown to lead to efficient incorporation of the S protein into VSV particles, and the resulting VSV pseudotype infects target cells efficiently (11).
2. VSVΔG*/GFP-G is a VSV-G protein-bearing VSV pseudotype in which the VSV-G gene is replaced with the GFP gene. VSVΔG*/SEAP-G is the same as VSVΔG*/GFP-G except that it carries the SEAP gene instead of the GFP gene.
Pseudotypes bearing the VSV-G protein are used as "seed" viruses for generating S protein-bearing VSV pseudotypes. Since the VSVΔG*/GFP and VSVΔG*/SEAP systems were developed by Prof. M. A. Whitt (University of Tennessee Health Science Center, TN), contact him about sharing and using the system before starting experiments.
3. As the SEAP gene is derived by modification of the human placental alkaline phosphatase gene, cell lines derived from placenta, which express an alkaline phosphatase similar to SEAP, should not be used as target cells. For the analysis of VSV-SARS-St19/SEAP infection, we suggest using Vero E6 cells, which show a very low level of alkaline phosphatase activity in the culture supernatant.
4. The medium containing VSV-SARS-St19/SEAP may have strong SEAP activity, since it is derived from the culture medium of 293T cells inoculated with VSVΔG*/SEAP-G (see steps 3-6 in Section 3.1). In order to remove carryover SEAP activity derived from VSVΔG*/SEAP-G, the Vero E6 cells have to be washed with PBS at least three times.
|
With the outbreak of COVID-19, maintaining the healthcare system is a crucial issue. In Japan, the number of COVID-19 cases is increasing rapidly day by day, with a risk of overshooting initial estimations (WHO, 2020a). Public health nurses (PHNs) working in public health centres in prefectures, designated cities, and core cities play a critical role in controlling COVID-19 (Yoshioka-Maeda, Honda, & Iwasaki-Motegi, 2020). As they provide care for COVID-19 patients, their families, and the community, the workload of PHNs has been reaching its maximum limit. The World Health Organization (WHO) released an interim guide, "Operational considerations for case management of COVID-19 in a health facility and community," on 19 March 2020 (WHO, 2020b). This article focuses on the resolutions PHNs have developed to prevent the dysfunction of public health centres in Japan during COVID-19, namely task sharing, securing staff, and task shifting.
When clusters of COVID-19 cases emerge, task sharing and securing staff are among the first strategies for ensuring the sustainable provision of health services. By identifying core health services, service delivery systems and staff allocations can be modified (WHO, 2020b). There are two ways of securing staff: obtaining support from outside the organization, and from within it. After an earthquake, the national government coordinates and sends PHNs to the stricken area. However, COVID-19 is spreading nationwide, and local governments cannot send PHNs to other areas. To source support from outside the organization, popular strategies include hiring part-time PHNs and nurses, re-employment, and securing volunteers from faculty members and graduate students in the community.
In the case of getting support from within the organization, chief
At first, a telephone consultation system was set up in each public health centre to answer people's questions regarding COVID-19 (Japan Ministry of Health, Labour, & Welfare, 2020) . However, individual public health centres faced difficulties responding to the vast number of questions. Outsourcing is an example of task shifting (Henderson, Willis, Toffoli, Hamilton, & Blackman, 2016) . To reduce the total number of telephone consultations, PHNs tried to outsource the telephone consultation system to local medical and nursing associations.
Additionally, each public health centre assessed the need for polymerase chain reaction (PCR) testing based on these telephone consultations. As consultations regarding PCR testing increased, the centres could not respond fast enough, and those suspected of being infected with COVID-19 were kept waiting. PHNs repeatedly called on hospitals to conduct PCR testing of suspected cases. To delay the spread and to slow down and stop the transmission of COVID-19, the PCR testing system had to be refined quickly. In collaboration with local medical associations, PCR testing shifted to local hospitals as vital hubs.
Please note that editorials have not been externally peer reviewed
|
other approaches to control or cure? One possibility is to try to publish a disease to death, a therapeutic strategy first proposed by my late colleague Prof. David Golde of UCLA (see below). Here I consider whether this strategy is working in the fight against the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) pandemic and the associated coronavirus infectious disease-2019 (COVID-19).
To test this hypothesis I queried PubMed on 16 May 2020 for citations using the search terms SARS-CoV-2 and/or COVID-19. There were 12,959 hits since January 2020, or roughly 162 citations per day. I confirmed this by comparing this number with a similar PubMed search I did on 14 May 2020. The difference of 484 citations is consistent with a recent publication rate of about 220 per day. The grim fact is that this number equates to the number of deaths from COVID-19 in the UK over the same interval. And this covers only citations indexed by PubMed; the figures from the World Health Organization, which tracks every manuscript on the virus and its disease submitted to journals irrespective of publication, would be much greater [1]. How to explain this burst of publications? Can many high-quality studies be done so quickly? Unlikely. In fact, of 1556 studies of COVID-19 listed in ClinicalTrials.gov [1], only 249 (16%) were phase-3 trials and fewer than 100 included more than 100 subjects.
Given the baseline estimate that 85 percent of clinical research is not useful or wrong, we may be pushing this estimate to 90 or 95 percent this year [2, 3]. One explanation for this publication deluge is the opportunity the pandemic offers authors and journals. Some journals (but not BJH) have lowered their criteria for acceptance. Consider authors' options: help with online schooling, cook dinner, vacuum (Dyson V7 highly recommended), or hide in your (newly designated) home office and complete a long-delayed typescript. The choice between publish or perish has never been starker.
I also considered that many, if not most, of this surge of publications are from Chinese authors; Figure 1 shows data on numbers of publications by geographic region and country. My next step was to evaluate the quality of these guidelines using criteria of the Infectious Diseases Society of America (Figure 2; [12]). Readers will not be surprised that the four guidelines received a C for Strength of Recommendation (poor evidence) and a III for Quality of Evidence (evidence from opinions of respected authorities... without clinical trials data). Nevertheless, the recommendations in these guidelines, although not evidence-based, seem sensible and may be useful. The risk is that they will be awarded the imprimatur of delivering quality health care in the absence of anything better. importantly, what is not) and we can be better prepared for the next coronavirus pandemic.
|
diagnose acute infection. Detecting antibodies generated by hosts against the virus can reveal the history of a past infection and acquired immunity to the disease. Selecting specific targets determines the proper assay formats and technologies (Table 1), which are detailed in the following sections. Up-to-date information on FDA-cleared commercial tests is available at https://csb.mgh.harvard.edu/covid.
NAATs for COVID-19 diagnostics are designed to detect unique viral RNA sequences in the N, E, S or RNA-dependent RNA polymerase (RdRp) genes. The viral genome of the original SARS-CoV-2 was sequenced and released in January 2020 (Wuhan-Hu-1, GenBank: MN908947.3), enabling fast development of COVID-19 NAATs. Since then, different strains have been sequenced many times, providing i) a clearer picture of mutations and conserved sites and ii) a view of the global evolution of different strains. Current NAAT primers and reagents are developed based on this information. NAATs offer high accuracy; after samples are taken and transported to laboratories, results are typically obtained within a couple of hours, with a limit of detection down to 0.02 copy/µL (Suo et al., 2020). As such, NAATs are recommended for acute disease detection even when patients have mild or nonspecific symptoms (e.g., fever, cough). A number of different NAATs are available for COVID-19 diagnosis.
Sample collection and storage is an important pre-analytical factor affecting the overall assay performance. The US-CDC guidelines list upper and lower respiratory specimens, such as nasopharyngeal (NP) or oropharyngeal (OP) swabs, sputum, lower respiratory tract aspirates, bronchoalveolar lavage, and nasopharyngeal wash/aspirate or nasal aspirate (CDC, 2020a). Alternative sources include saliva, anal swabs, urine and stool (Xie et al., 2020a), and tears and conjunctival secretions. For initial diagnostics, the US-CDC recommends collecting an upper respiratory specimen, prioritizing the NP swab, although OP swabs remain an acceptable specimen type (CDC, 2020b).
Swabs are the most widely used tools for sample collection and are considered FDA Class I exempt medical devices. As for materials, synthetic-fiber (e.g., nylon, polyester filament) swabs with plastic shafts should be used. Calcium alginate swabs or swabs with wooden shafts should be avoided, because they may contain substances that inactivate some viruses and can inhibit PCR testing (CDC, 2020b). For high viral yields, sample collection with a flocked swab is preferred (Daley et al., 2006). The rapid spread of COVID-19, however, has resulted in shortages of NP swabs due to unprecedentedly high demand. Responding to this bottleneck, an open-development consortium set out to develop 3D-printed NP swabs that can be mass-produced (Callahan et al., 2020). The team tested different designs and materials and validated promising candidates in clinical trials. These efforts led to FDA-registered test swabs with superior or equivalent efficacy to flocked swabs (Callahan et al., 2020).
For specimen transport and storage, the swab material should be placed into a sterile tube filled with viral transport medium (VTM) and kept refrigerated (2-8 °C) for up to 72 hours after collection. If a delay in testing or shipping is expected, specimens should be stored at -70 °C or below. The VTM recommended by the US-CDC and the World Health Organization (WHO) is based on Hanks' balanced salt solution (HBSS) and contains heat-inactivated fetal bovine serum and antibiotics (gentamycin and amphotericin B). VTM shortages have also been experienced during the COVID-19 pandemic, impairing local and regional diagnostic capacity (Radbel et al., 2020). Radbel et al. tested phosphate-buffered saline (PBS) as a potential alternative to VTM (Radbel et al., 2020). Using clinical endotracheal secretion samples (n = 16), the authors evaluated the stability of the PCR signal from three viral targets (N, ORF1ab, and S genes) when samples were stored in these media at room temperature for up to 18 hours. Test results were similar between PBS- and VTM-based storage, which may establish PBS as a cost-effective medium for short-term preservation of specimens. However, further validation with NP swabs is needed.
Reverse transcription polymerase chain reaction (RT-PCR) was the first method developed for COVID-19 detection and is the current gold standard. The WHO adopted its version of the RT-PCR test and implemented it in different countries (Sohrabi et al., 2020). In the US, the CDC developed its own standards (CDC, 2020a).
An RT-PCR assay starts with extracting RNA from clinical specimens. Several commercial kits are recommended by the US-CDC for this process (Table 3). These kits are based on solid-phase extraction using silica substrates; nucleic acids selectively bind to the silica surface in the presence of chaotropic ions. Following wash steps, adsorbed nucleic acids are eluted with a low-salt solution. To remove DNA contamination, the eluate is treated with DNase, followed by heat treatment (15 min, 70 °C) to inactivate the DNase. Using magnetic beads coated with silica can facilitate sample handling, eliminating the need for centrifugation. Several extraction platforms indeed employ magnetic actuation to automate high-throughput sample processing (Ali et al., 2017). Next, extracted viral RNA is mixed with reagents containing target gene primers, probes, and RT-PCR master mix, and amplified. Depending on the probe design, PCR products can be detected during the amplification process (quantitative PCR, qPCR) or after its completion.
The analytical accuracy of COVID-19 RT-PCR relies primarily on primer design. Due to the high genomic similarity among coronavirus species, identifying unique gene sequences is important to eliminate cross-reactivity. Viral targets are selected from the E, N, S, and Orf1ab regions of the SARS-CoV-2 genome (Figure 1b), and human RNase P (RP) is used as an internal positive control (Jung et al., 2020). Table 2 shows selected primer-probe sets announced by the WHO. According to an initial comparison of these probes, the US-CDC 2019-nCoV_N2 and 2019-nCoV_N3 and the Japanese NIID_2019-nCOV_N primer sets were highly sensitive for the N gene, and the Chinese-CDC ORF1ab panel for the ORF1ab gene (Jung et al., 2020). The N3 assay manufactured by the US-CDC, however, encountered false-positive issues and was removed from the US-CDC diagnostic panel (CDC, 2020a). A study by the US-CDC showed that removing the N3 assay had negligible effects on sensitivity for detecting SARS-CoV-2 (Lu et al., 2020c). Targeting only N1 and N2 also simplified the overall test, increasing throughput and reducing cost (Lu et al., 2020c). As more commercial and laboratory-developed RT-PCR tests become available, it is increasingly critical to evaluate their performance against a common standard. The Foundation for Innovative New Diagnostics (FIND), in partnership with the WHO, is now conducting independent evaluations of SARS-CoV-2 molecular tests (FIND, 2020).
RT-PCR offers both high accuracy and throughput. The limit of detection (LOD) was reportedly down to 4-8 viral copies, upon amplification of the Orf1ab, E, and N genes, at 95% confidence intervals (Han et al., 2020; Zou et al., 2020), and multiple assays can be carried out in a parallel format (e.g., a 384-well plate). Specificity is enhanced by targeting multiple loci. Indeed, the US-CDC diagnostic recommends the use of two targets (N1, N2) in the N gene and RP as a control; the WHO recommends the E gene assay for first-line screening, with positive cases further confirmed by the RdRp gene assay. The metric for COVID-19 diagnosis is the cycle threshold (Ct). A Ct value less than 40 is clinically reported as PCR positive. Viral RNA loads become detectable as early as day 1 of symptom onset and peak within a week. Positivity declines by week 3 and subsequently becomes undetectable (Sethuraman et al., 2020). RT-PCR tests are usually performed in centralized laboratories because they require dedicated equipment, trained personnel, and stringent contamination control. Establishing efficient logistics for sample transfer and securing reagents are critical to minimize delays in assay turnaround. Proper sample preprocessing (e.g., sample collection, RNA extraction) is also key to reducing false negatives (Ai et al., 2020; Xie et al., 2020b).
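To make the reporting rule concrete, the sketch below applies the Ct < 40 cutoff to the two-target (N1/N2) design with an RP control described above. The combination rules are a simplified illustration, not the verbatim US-CDC interpretation table.

```python
# Minimal sketch of a two-target RT-PCR interpretation, assuming the
# Ct < 40 positivity cutoff and N1/N2 + RP control design described above.

CT_CUTOFF = 40.0  # Ct values below this are reported positive for a target

def target_positive(ct):
    """A target amplified if a Ct value was recorded below the cutoff."""
    return ct is not None and ct < CT_CUTOFF

def interpret(n1_ct, n2_ct, rp_ct):
    """Combine the N1/N2 viral targets with the human RNase P (RP) control."""
    n1, n2 = target_positive(n1_ct), target_positive(n2_ct)
    if n1 and n2:
        return "SARS-CoV-2 detected"
    if n1 or n2:
        return "inconclusive - repeat test"
    if target_positive(rp_ct):
        return "SARS-CoV-2 not detected"
    return "invalid - no amplification of the RP control"

print(interpret(24.3, 25.1, 28.0))  # -> SARS-CoV-2 detected
print(interpret(None, None, 27.5))  # -> SARS-CoV-2 not detected
```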
Digital PCR (dPCR) enables the absolute quantification of target nucleic acids. The method partitions samples into large numbers of small (~nanoliter) reaction volumes, ensuring that each partition contains few or no target sequences, per Poisson statistics (Baker, 2012). Following PCR, amplification-positive partitions are counted for quantification. Among the various partitioning methods (e.g., microwell plates, capillaries, oil emulsion, miniaturized chambers), droplet digital PCR (ddPCR) is the most widely used, with commercial systems available (Hindson et al., 2013). ddPCR has higher sensitivity (~0.01 copies/µL) than conventional PCR, which makes it possible to detect very low viral loads. For example, when pharyngeal swab samples from convalescent COVID-19 patients were compared, ddPCR detected viral RNA (Chinese CDC sequences) in 9 out of 14 (64.2%) RT-PCR-negative samples. In another ddPCR application, researchers tracked treatment progress by analyzing clinical samples collected on different dates. ddPCR reported a decrease in viral load as treatment proceeded, whereas RT-PCR showed sporadic positive results. Viral loads of specimens collected from different locations in the same patient were also compared: the load was highest in pharyngeal samples, lower in stool samples, and lowest in serum (Lu et al., 2020a).
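The Poisson correction behind ddPCR quantification fits in a few lines. The sketch below assumes a ~0.85 nL droplet volume, typical of commercial droplet generators; the counts and the volume are illustrative, not from any cited study.

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
    """Estimate target concentration (copies/uL) from droplet counts.

    Poisson correction: the mean number of copies per droplet is
    lambda = -ln(1 - p), where p is the fraction of positive droplets.
    """
    p = positive / total
    lam = -math.log(1.0 - p)              # mean copies per droplet
    volume_ul = droplet_volume_nl * 1e-3  # nL -> uL
    return lam / volume_ul

# e.g., 120 positive droplets out of 15,000 accepted droplets
print(round(ddpcr_concentration(120, 15000), 2))  # ~9.45 copies/uL
```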
Applying isothermal amplification enabled the development of point-of-care (POC) COVID-19 NAATs. This amplification technique uses specialized DNA polymerases with strand-displacement capacity; the polymerases can push their way in and unzip double-stranded DNA as they synthesize a complementary strand. Importantly, the reaction takes place at a fixed temperature, removing thermal cycling steps and thereby simplifying device design. Various isothermal amplification methods have been adapted to detect SARS-CoV-2 RNA targets (Yu et al., 2020; Lu et al., 2020b; Zhu et al., 2020). The analytical sensitivities of these methods were shown to be comparable to that of RT-PCR, but with shorter assay times (<1 hour).
Isothermal NAATs have unique applications in point-of-care COVID-19 diagnostics, providing fast results without the need for specialized equipment (Foo et al., 2020; Yan et al., 2020a). Practical considerations, however, still position RT-PCR as the principal method: i) RT-PCR has been the gold standard for decades and has a well-developed supply chain for reagents and equipment; ii) RT-PCR is simpler in primer design and requires fewer additives, which brings down the cost per test; iii) in clinical laboratories where large batches of samples are processed, RT-PCR easily makes up for the speed advantage of isothermal NAATs; and iv) RT-PCR is license-free, with most patents expired, whereas the major isothermal NAATs are proprietary products.
LAMP uses 4 or 6 primers targeting 6-8 regions in the genome, together with Bsm DNA polymerase (Notomi et al., 2015). As the reaction starts, pairs of primers generate a dumbbell-shaped DNA structure that subsequently functions as the LAMP initiator (Figure 2a). The method can generate ~10⁹ DNA copies within an hour, and the reaction takes place at a constant temperature between 60 and 65 °C (Ménová et al., 2013). The enzyme is resistant to inhibitors in complex samples, making it possible to use native samples (blood, urine, or saliva) with minimal processing. The LAMP reaction produces magnesium pyrophosphate as a by-product, which can be exploited for visual readout of the assay using metal-sensitive indicators or pH-sensitive dyes. FDA-approved LAMP tests are already available for Salmonella and Cytomegalovirus detection (Yang et al., 2018; Schnepf et al., 2013).
Designing primer sets is a key challenge in developing COVID-19 LAMP assays, as multiple pairs of primers are required for a given target sequence and the melting temperatures of these primers should match the optimal working temperature of the DNA polymerase (Notomi et al., 2000). Fortunately, online software (Primer Explorer V5) is available to facilitate the process (Chemical, 2020).
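As a rough illustration of this melting-temperature constraint, the sketch below screens candidate primers using a simple GC-content approximation of Tm. Real design tools such as Primer Explorer use nearest-neighbor thermodynamics and also check primer-primer interactions, so this is only a first-pass filter; the sequences are made up.

```python
# First-pass Tm screen for candidate LAMP primers, assuming the simple
# GC-content approximation Tm = 64.9 + 41*(GC - 16.4)/N (deg C).

def approx_tm(seq):
    """Approximate melting temperature (deg C) of a primer sequence."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(seq)

def in_lamp_window(seq, lo=60.0, hi=65.0):
    """Keep primers whose Tm falls in the 60-65 C Bsm working range."""
    return lo <= approx_tm(seq) <= hi

candidates = ["ACCTGTGTAGGTACTGCAGCTGTA", "ATGCGGCATTAAGCGG"]  # invented
for c in candidates:
    print(c, round(approx_tm(c), 1), in_lamp_window(c))
```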
Most studies reported primer sets targeting regions of the SARS-CoV-2 ORF1a and N genes (Lu et al., 2020b; Park et al., 2020; Yan et al., 2020a). Using these sets, the typical assay run-time was ~1 hour and the limit of detection on the order of 10 copies/µL (Figure 2b). Zhu et al. reported 100% analytical sensitivity across 33 SARS-CoV-2-positive oropharyngeal swab samples and 96 SARS-CoV-2-negative samples (Zhu et al., 2020). Importantly, the entire reaction could be performed in one pot (RT-LAMP) by using a master mix containing reverse transcriptase (e.g., NEB WarmStart Colorimetric LAMP 2X Master Mix). The total assay time, however, could be >1 hour (90 to 150 min) when manual sample handling steps (e.g., RNA extraction) are included. Another drawback is the difficulty of multiplexing. With each target requiring 4 to 6 primers, increasing the number of targets quickly complicates primer design and raises the chance of primer-primer interactions.
NEAR uses both a strand-displacement DNA polymerase (e.g., Bst polymerase) and nicking endonuclease enzymes to exponentially amplify short oligonucleotides (Wang et al., 2018b). Figure 3a shows the two-step working mechanism. First, nicking primers (P1, P2), each containing a restriction (nicking) site, a stabilizing region, and a binding sequence, are mixed with a sample. Primer binding, displacement extension, and nicking action produce double-stranded DNA with restriction sites at both ends (the NEAR amplification duplex; Figure 3a, left). Next, nicking enzymes cleave the restriction sites of the duplex, producing two free-ended templates (T1, T2; Figure 3a, right) that are unstable at the elevated temperature (55 °C) and ready to dissociate (Ménová et al., 2013). Each template undergoes repeated polymerization and single-strand cleavage, which results in the amplification of products (A1, A2). These products also hybridize with primers (A1-P2; A2-P1) and contribute to successive amplification in a bidirectional manner until the reaction mixture components are depleted. In this way, thousands of copies can be produced from one restriction site, which makes NEAR a uniquely efficient amplification technique. However, NEAR is used less frequently than other isothermal amplification methods, mainly due to the formation of non-specific products. These products can also be extended by the polymerase and compete with the target sequence (Wang et al., 2018b).
Abbott Laboratories adopted the NEAR technique and rolled out a compact, integrated diagnostic system, ID NOW (Figure 3b). The system comes with a convenient cartridge for sample processing. Total hands-on time is 2 min, and the total assay time is <15 min. The company already has ID NOW tests for Group A Streptococcus and influenza on the market (Wang et al., 2018a; Nie et al., 2014), which helped the rapid introduction of ID NOW COVID-19. The test was designed to detect a sequence in the RdRp region of the SARS-CoV-2 genome, and the reported limit of detection was 0.125 copies/µL. The assay received FDA-EUA for COVID-19 diagnostics.
RPA borrows its concept from homologous DNA recombination to amplify double-stranded DNA (Lobato and O'Sullivan, 2018; Li et al., 2018). In this process (Figure 4a), primers first bind to recombinase to form nucleoprotein filaments. These complexes search for homologous sequences in the target DNA and invade the cognate sites. Subsequently, the recombinase disassembles from the invaded strand, and DNA polymerase executes the strand-displacing extension. During this process, the displaced strand is stabilized by single-stranded binding proteins, and the released recombinases become available to form new nucleoprotein filaments for further cycles. At the end of this process, the double-stranded DNA target has been exponentially amplified.
RPA has been widely used for point-of-care infection diagnostics. RPA requires only a pair of primers, like NEAR, but can be carried out at lower temperatures (37-42 °C) and is therefore more suitable for one-pot assay designs (Ménová et al., 2013). All RPA reagents are commercially available through TwistDx™ (a subsidiary of Abbott), even in a lyophilized pellet format. The company also supplies probe kits for different detection methods (e.g., gel electrophoresis, real-time fluorescent detection, lateral flow strips). Compared with LAMP, RPA is much faster (20 min) but may produce non-specific amplification due to its simpler primer design.
For COVID-19 detection, Xia et al. designed RPA primers targeting regions of the N gene. Typical RPA reagents were mixed with reverse transcriptase and RNase inhibitor to enable one-pot RNA reverse transcription (Figure 4b). Amplified targets were then detected using commercial fluorescent or lateral flow probe kits. The reaction time was about 30 min and the LOD was 0.2 copies/µL. The results, however, were limited to synthetic RNA rather than extracted viral RNA from clinical samples.
Clustered regularly interspaced short palindromic repeats (CRISPR) systems offer new ways to amplify analytical signals with precision down to single-nucleotide variants (Kellner et al., 2019; Gootenberg et al., 2017; Aman et al., 2020). The most advanced forms of these assays use Cas12a (CRISPR-associated protein 12a) or Cas13a (CRISPR-associated protein 13a) enzymes, exploiting the collateral cleavage of single-stranded DNA (Cas12a) or RNA (Cas13a) by these nucleases. In one method, termed SHERLOCK (specific high-sensitivity enzymatic reporter unlocking) (Gootenberg et al., 2017), RNA targets are first amplified via RT-RPA, and the amplified DNAs are transcribed to target RNA. A CRISPR RNA (crRNA)-Cas13a complex then binds and cleaves the target RNA. Non-target RNA probes conjugated with a fluorescent dye (F) and quencher (Q) pair are also cleaved by the complex, providing a fluorescent signal. Similarly, the DETECTR (DNA endonuclease-targeted CRISPR trans reporter) method uses a crRNA-Cas12a complex to recognize amplified DNA targets (Chen et al., 2018). Binding of the crRNA-Cas12a complex to target DNA induces indiscriminate cleavage of non-target FQ-DNA reporters.
Broughton et al. applied the Cas12a method to COVID-19 detection (Figure 5a). The assay was designed to detect regions in the E and N genes of SARS-CoV-2, with the human RNase P gene as a control. Target genes were amplified via RT-LAMP and recognized by the crRNA-LbCas12a complex, which cleaves DNA reporter probes (Figure 5b). Using synthetic in-vitro-transcribed (IVT) SARS-CoV-2 RNA targets, the authors reported a limit of detection of 10 copies/µL. The assay was complete in 45 min, and the analytical signal was read out with lateral flow strips (Broughton et al., 2020). Metsky et al. designed a Cas13-based COVID-19 test (Metsky et al., 2020). The study used machine learning algorithms to generate multiplex panels (67 assays) to identify SARS-related coronavirus species. The assay amplified target RNA via RT-RPA, which was then transcribed to RNA for recognition by crRNA-LwaCas13 conjugates. Several drawbacks, however, limit the practical use of these assays. The reported methods still require nucleic acid amplification to achieve high sensitivity; CRISPR techniques offer a signal transduction mechanism after such amplification. The assays also involve extra hands-on steps: crRNA-Cas complexes need to be mixed separately and incubated (30 min, 37 °C) before each test, and amplified nucleic acids must be mixed with these complexes. In comparison, most isothermal NAATs for COVID-19 already offer one-pot amplification and detection. Overcoming these issues, Joung et al. introduced a one-step approach, SHERLOCK Testing in One Pot (STOP), which integrated LAMP amplification with CRISPR-mediated detection (Figure 5c) (Joung et al., 2020). The authors found that Cas12b from Alicyclobacillus acidiphilus (AapCas12b) retained sufficient activity in the temperature range of LAMP. They further identified the optimal combination of primers and guide sequence, and screened 94 additives to improve the thermal stability of the one-pot reaction. After the assay, the signal was detected with lateral flow reporter devices. The reported LOD was about 2 copies/µL (N gene) using SARS-CoV-2 genome standards spiked into pooled healthy saliva or nasopharyngeal swabs. The assay was validated with clinical nasopharyngeal swab samples (Figure 5d); STOP correctly diagnosed 12 COVID-19-positive and 5 negative patients in 3 replicates. The assay time was about 70 min using the lateral flow readout.
Immunoassays detect the presence of virus-specific antigens or of antibodies against the virus (Figure 6a). While NAATs are ideally suited to diagnosing viral infection during its initial phase, immunoassays, particularly antibody tests, can detect ongoing or past infection, promoting a better understanding of transmission dynamics. Immunoassays can also augment NAATs to reduce false-negative results (Racine and Winslow, 2009; Louie et al., 2004); antigens and antibodies are more stable than RNA and therefore less susceptible to degradation during transport and storage.
These are typically blood-based tests that detect host-derived antibodies against the virus. The previous SARS epidemic showed that virus-specific immunoglobulin M (IgM) appears within a week of infection, followed by the production of IgG for long-term (~2 years) immunity (Wu et al., 2007). Immunological data for COVID-19 are still emerging, but a recent study of 214 patients (Hubei, China) indicated a similar early pattern: IgM positivity was higher than that of IgG during the initial days after disease onset, and then dropped within about one month. Another study of 238 patients (Hubei, China) compared the positive rates of RT-PCR and serologic tests (Figure 6b) (Liu et al., 2020a). Antibody positive rates (IgG, IgM, or both) were 29.4% (5/17) in the first five days after symptom onset, and then increased to 81% (17/21) after day 10. Conversely, the RT-PCR test had an initial positive rate of 75.9% (41/54), which dropped to 64.3% after day 11. These results point to the potential utility of serologic tests, not for diagnosing acute COVID-19, but rather as a wide screening tool. For example, by testing antibodies among the general public through random sampling (i.e., a serosurvey), public health agencies can estimate the true size of the infection (prevalence) and its fatality rate. Serologic tests could also be an assessment tool for deciding whether individuals can return to social contact.
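For the serosurvey use case, apparent positivity must be corrected for test error before estimating prevalence. A minimal sketch using the standard Rogan-Gladen estimator; the accuracy figures and survey result are illustrative, not from any cited study.

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Correct an apparent seroprevalence for imperfect test accuracy.

    Rogan-Gladen estimator:
        true = (apparent + specificity - 1) / (sensitivity + specificity - 1)
    Clamped to [0, 1], since sampling noise can push the raw value outside.
    """
    est = (apparent_prevalence + specificity - 1.0) / (
        sensitivity + specificity - 1.0
    )
    return min(max(est, 0.0), 1.0)

# e.g., 6% of serosurvey samples test antibody-positive with an assay
# that is 90% sensitive and 98% specific (illustrative numbers only)
print(round(rogan_gladen(0.06, 0.90, 0.98), 4))  # ~0.0455 true prevalence
```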
Developing serologic tests relies critically on producing suitable viral antigens or recombinant proteins to capture host antibodies. Based on previous data on SARS-CoV, it is likely that the S and N proteins are the main immunogens among the four structural proteins (i.e., S, E, M, and N) (Meyer et al., 2014), but SARS-CoV-2 antigen candidates should be evaluated for specificity against the most common human coronaviruses (HCoV-OC43, HCoV-HKU1, HCoV-229E, HCoV-NL63) and zoonotic ones (SARS-CoV, MERS-CoV). Okba et al. analyzed the similarity of S and N proteins among these coronaviruses and found that the S1 subunit of the SARS-CoV-2 S protein has the least overlap with other coronaviruses (Figure 6c). The authors further assessed N and S1 ELISAs using serum samples from healthy donors as well as from patients infected with non-CoV respiratory pathogens, HCoV, MERS-CoV, SARS-CoV, or SARS-CoV-2 (Figure 6d). The S1 ELISA showed high specificity against the healthy, non-CoV, HCoV, and MERS-CoV cohorts, whereas the N ELISA was more sensitive in detecting antibodies from mild COVID-19 patients. Differentiating SARS-CoV-2 and SARS-CoV samples was not possible due to cross-reactivity. However, the human population with SARS-CoV antibodies is expected to be small: SARS-CoV has not circulated since 2003, and a previous study reported the waning of SARS-CoV antibodies to undetectable levels (21 out of 23 samples) within 6 years of infection (Tang et al., 2011). Considering these results, the S1 and N proteins are likely the most suitable antigens for COVID-19 serologic tests.
RDTs are based on host antibody detection on a nitrocellulose membrane. Easy to operate and portable, these tests are suited to POC analyses of fingerprick blood, saliva, or nasal swab fluids. Samples are dropped onto a loading pad and transferred via capillary motion. During this flow, antibodies in the sample bind to nanoparticles, and the whole complex is captured downstream at designated spots on the membrane by anti-human antibodies (Figure 7a). The final results are usually displayed as colored lines for naked-eye detection: a control line confirming test reliability, and test line(s) indicating the presence of target antibodies. Most assays use gold nanoparticles for signal generation, while carbon or colored latex nanoparticles are alternative labeling candidates.
Initial studies reported high analytical sensitivity (86-89%) and specificity (84.2-98.6%) for RDTs (Liu et al., 2020c; Li et al., 2020). Test accuracies, however, vary significantly among commercial vendors. As more companies race to develop serologic RDTs (>100 companies as of May 2020), the need for rigorous vetting is increasing. The FDA and FIND are currently conducting independent evaluations of selected products (FIND, 2020).
ELISA is a lab-based test with high sensitivity and throughput. It typically uses a multi-well plate coated with viral proteins. Blood, plasma, or serum samples from patients are introduced into these wells for antibody capture and then washed. Subsequently, secondary antibodies labeled with enzymes are added, which catalyze signal generation. The assay format can be adapted to different detection modalities, including colorimetric, fluorescent, and electrochemical methods (Figure 6a). The analytical sensitivity is down to picomolar (pM) ranges, and the typical assay time is 2-5 hours (Weissleder et al., 2020; Younes et al., 2020).
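Quantitative ELISA readouts are usually interpolated from a standard curve. The sketch below fits the widely used four-parameter logistic (4PL) model to hypothetical calibration points and inverts it to read a sample concentration off an optical density; all numbers are invented and the 4PL choice is a common convention, not a claim about any specific kit.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a, d = lower/upper asymptotes, c = inflection, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration: antibody standard (ng/mL) vs optical density
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
od = np.array([0.05, 0.11, 0.32, 0.80, 1.45, 1.90])

params, _ = curve_fit(four_pl, conc, od, p0=[0.02, 1.0, 3.0, 2.0], maxfev=10000)

def od_to_conc(y, a, b, c, d):
    """Invert the fitted 4PL to interpolate a sample concentration."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

print(od_to_conc(0.5, *params))  # sample with OD 0.5 -> ng/mL estimate
```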
As with COVID-19 RDTs, identifying effective viral antigens is an important factor in ELISA development.
One study compared three ELISAs, each using the N protein, the S1 subunit (S protein), or the receptor binding domain (RBD; S protein) as the viral antigen. The RBD and N ELISAs were shown to be more sensitive than the S1 ELISA, but the cohort (n = 3 for COVID-19 infection) was too limited to draw conclusions. In another study, sera from 214 COVID-19 patients were subjected to N and S ELISAs to detect IgG and IgM (Liu et al., 2020a). In this study, the S-based ELISA showed higher sensitivity than the N-based ELISA (Figure 7b).
VNT is the gold standard for assessing whether an individual has active antibodies against a target virus. In this assay, serial dilutions of test serum (or plasma) are prepared, typically in a 96-well plate, and incubated with a set amount of infectious virus. The mixture is then inoculated onto susceptible cells (e.g., VeroE6) and cultured for 2-3 days. The test results are typically read out via microscopy for evidence of viral cytopathic effect (CPE); neutralizing antibodies block virus replication and let the cells grow. The plaque reduction neutralization test (PRNT), one type of VNT, counts plaque-forming units on an agar- or carboxymethyl cellulose-coated cell layer, while the focus reduction neutralization test (FRNT) relies on immunocolorimetric analysis to calculate neutralizing antibody titers. Suthar et al. compared the efficiency of PRNT and FRNT assays for RBD-specific IgG responses that COVID-19 patients developed 6 days after PCR diagnosis and found a strong correlation between the tests (Suthar et al., 2020). In another study, Wang et al. used PRNT to evaluate the human monoclonal antibody 47D11, which binds to the S-RBD and can neutralize both SARS-CoV-2 and SARS-CoV. Although highly specific, VNT is time-intensive and requires specialty laboratories (e.g., biosafety level 3 facilities for COVID-19). As such, these tests are primarily used for vaccine and therapeutic development.
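The titer readout of a PRNT-style assay reduces to a simple rule: the reciprocal of the highest serum dilution that still achieves, say, 50% plaque reduction versus a no-serum control. A minimal sketch with hypothetical plate counts, assuming plaque counts rise monotonically with dilution:

```python
def prnt50_titer(plaque_counts, virus_only_count):
    """Return the PRNT50 titer: the reciprocal of the highest dilution that
    reduces plaques by at least 50% versus the virus-only control well.

    plaque_counts: dict mapping dilution factor -> plaque count, e.g. {20: 3}.
    """
    titer = None
    for dilution in sorted(plaque_counts):            # 1:20, 1:40, ...
        reduction = 1.0 - plaque_counts[dilution] / virus_only_count
        if reduction >= 0.5:
            titer = dilution                          # still neutralizing
        else:
            break                                     # neutralization lost
    return titer

counts = {20: 3, 40: 8, 80: 21, 160: 35, 320: 44}    # hypothetical counts
print(prnt50_titer(counts, virus_only_count=48))      # -> 80
```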
Several groups have developed pseudovirus-based neutralization assays (PBNAs), using pseudovirus (PSV) as a safer (biosafety level 2) surrogate for SARS-CoV-2 (Nie et al., 2020; Wu et al., 2020). Wu et al. generated PSV by incorporating the SARS-CoV-2 S protein into the envelope of vesicular stomatitis virus pseudotypes. These PSVs were used for VNTs with plasma samples from recovered COVID-19 patients. Convalescent plasma from COVID-19 patients inhibited SARS-CoV-2 infection (Figure 7c) and did not cross-react with SARS-CoV pseudovirus. The study also showed that titers of neutralizing antibodies peaked 10 to 15 days after disease onset and remained stable thereafter. Interestingly, about 30% of recovered patients (n = 175) showed low levels of neutralizing antibodies; this observation may have implications for applying and interpreting serologic tests to detect past COVID-19 infection.
This assay detects the presence of viral proteins (antigens) through a conventional immunocapture format (Figure 6a). Viral antigens can be detected when the virus is actively replicating, which makes this assay type highly specific. The assay, however, has suboptimal sensitivity, generally requiring sufficient antigen concentrations in samples. Data from influenza antigen tests (Bruning et al., 2017) showed a sensitivity of 61% and a specificity of 98%. A potential use of antigen assays is thus as a triage test to rapidly identify patients who are likely to have COVID-19, reducing or eliminating the need for lengthy molecular confirmatory tests. Monoclonal antibodies against the N protein of SARS-CoV-2 have been generated, and several rapid test kits are under development.
Aggressive testing and isolation measures have started to blunt the first wave of COVID-19. From these experiences, lessons are emerging for new diagnostics and surveillance policies that will better prepare us for potential next waves (Fineberg, 2020). On the diagnostic side, we identify the following needs.
Most current NAATs have analytical sensitivities and specificities around 95% or higher under ideal circumstances and when performed by skilled operators. Yet, in clinical practice, the sensitivity drops precipitously to 60-70% (Ai et al., 2020; Lassaunière et al., 2020), necessitating re-testing that costs valuable time in symptomatic patients (Weissleder et al., 2020). The likely reason for this discrepancy is the variable swabbing efficiency of nasopharyngeal, oral, sputum, and bronchial samples. Some countries have instituted dual testing of nasopharyngeal and sputum/throat samples to increase accuracy. Systematic research is needed to evaluate the efficacy of swab materials and the resulting RNA yields.
There is a need to develop rapid, antigen-based COVID-19 tests, not least to reduce the complexities of lengthy RT-PCR. The development of NAATs was a reasonable emergency decision, considering NAATs' high analytical sensitivity and the short lead time in assay development. But NAATs are generally process-intensive, susceptible to contamination, and expensive. Antigen-based tests, on the other hand, could be a niche tool for cost-effective POC diagnosis in primary care settings. Such systems have already been developed for influenza (CDC, 2020c). For example, rapid influenza diagnostic tests (RIDTs), which detect the presence of influenza A and B viral nucleoprotein antigens, can identify flu patients with high specificity. RIDT-positive patients can receive necessary care after this quick (<20 min) test, and only negative samples need to be routed for laboratory molecular analyses. With effective COVID-19 antigen tests, a similar triaging strategy could be implemented to ease the demand for molecular tests.
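Whether such a triage strategy is safe depends on predictive values, not just sensitivity and specificity. The sketch below applies Bayes' rule using the influenza antigen-test figures cited above; the 10% pre-test prevalence is an assumed value for illustration.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Antigen-style test (61% sensitive, 98% specific, per the influenza data
# cited above) at an assumed 10% prevalence in a symptomatic population
ppv, npv = predictive_values(0.61, 0.98, 0.10)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")  # PPV ~0.77, NPV ~0.96
```

The high NPV supports routing only negative samples to confirmatory molecular testing, exactly the triage pattern described above.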
Establishing effective serologic tests. As we transition to the flattening phase of COVID-19, the need for serologic tests will increase. Individuals who have recovered from the disease or been asymptomatic can use these tests to make informed decisions about social activities; population-wide serologic screening will allow governments to learn the true extent of infection. A major issue with current serologic tests is their high variability (Lassaunière et al., 2020), and often low sensitivity and specificity. Comparison of different test kits, virtually all based on the lateral flow assay format, has shown that some perform much better than others (Whitman et al., 2020), presumably due to the affinity reagents used. Identifying and synthesizing the most immunogenic, high-affinity viral antigens is a critical step toward improving diagnostic accuracy. Equally important is conducting interference challenges to check how drugs, medications, and coagulation status affect serologic testing outcomes.
Infection with SARS-CoV-2 is highly dynamic, with viral titers and antibody levels changing over time in asymptomatic and symptomatic patients. Serial testing is necessary to identify patients before irreversible complications occur as well as to confirm full recovery. Accumulated data will further inform the duration of SARS-CoV-2 immunity and help us set cutoffs for antibody positivity. We envision that integrating POC tests with digital networks will facilitate such tasks. Patients at home, for example, could log their test results and symptoms and receive telemedicine feedback, which would be a cost-effective, safer care model for stable or recovering patients. Digital services would also allow public health agencies to gather data from large populations to track disease transmission in real time.
Setting up global standards. New COVID-19 tests are approved based on their analytical validity, with sensitivity and specificity measured on manufacturers' own artificial samples. However, significant performance deviations have been reported in independent testing (FIND, 2020). This situation demands the development of global reference standards (e.g., pseudovirus, viral nucleic acids, viral antigens, antibodies) to enable objective inter-test comparison. Also necessary are guidelines (e.g., sensor specifications, cost, accuracy) for each test purpose, for example, diagnosing acute infections in hospitals or long-term care facilities, at-home monitoring, and population surveys. These efforts will benefit both the clinical and research communities, motivating technical innovation.
We should accelerate the development of new diagnostic methods. In particular, novel transducer technologies, such as nanoplasmonics, ion-gated transistors, and optical resonators, have exquisite sensitivities and could potentially enable direct viral detection. Several exciting systems have already been reported: a graphene-based transistor with an LOD of 2.4 × 10² viruses/mL (Seo et al., 2020), and a plasmonic photothermal sensor that detected the RdRp target down to 0.22 pM (Qiu et al., 2020). Pursuing these approaches will be critical to transcending current NAATs and realizing rapid, on-site diagnostics.
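Comparing these LODs across papers requires unit conversion, since molar figures and copies-per-volume figures are not directly comparable. A short sketch of the arithmetic, assuming ideal conversion via Avogadro's number:

```python
AVOGADRO = 6.022e23  # molecules per mole

def pM_to_copies_per_uL(picomolar):
    """Convert a molar LOD to copies/uL (1 pM = 1e-12 mol/L; 1 L = 1e6 uL)."""
    copies_per_liter = picomolar * 1e-12 * AVOGADRO
    return copies_per_liter / 1e6

def per_mL_to_per_uL(count_per_ml):
    """Convert a per-milliliter count to per-microliter."""
    return count_per_ml / 1e3

# The plasmonic sensor's 0.22 pM LOD expressed in copies/uL
print(f"{pM_to_copies_per_uL(0.22):.2e}")  # ~1.32e+05 copies/uL
# The graphene transistor's 2.4e2 viruses/mL expressed per uL
print(per_mL_to_per_uL(2.4e2))             # 0.24 viruses/uL
```

The comparison shows why a sub-picomolar figure, impressive in molar terms, can still correspond to a far higher copy concentration than an amplification-based NAAT detects.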
The authors declare no competing interests.

[Figure legends recovered from the original layout. Figure 3b (ID NOW workflow): a swab (nasal, nasopharyngeal, or throat) is eluted in the sample receiver containing elution/lysis buffer; after 10 s of mixing, the mixture is manually transferred via the transfer cartridge to the test base holder containing lyophilized NEAR reagents; heating, agitation, and fluorescence detection are performed automatically by the instrument; the assay detects the SARS-CoV-2 RdRp gene (adapted with permission from Nie et al., 2014; copyright 2014, American Society for Microbiology). Figure 5 (STOP workflow): lysate is added to SHERLOCK master mix, heated for 60 min at 60 °C, and read out on lateral flow strips (2 min); nasopharyngeal swab samples from COVID-19 patients and controls were correctly diagnosed (panels a, b adapted with permission from Broughton et al., 2020, copyright 2020 Nature Publishing Group; panels c, d adapted from Joung et al., 2020). Figure 6a: antigen tests directly capture viral proteins or whole virus, whereas antibody tests capture host antibodies (e.g., IgG, IgM) with synthetic viral antigens or anti-human antibodies; both use a reporter probe for signal generation; virus neutralization tests check whether a specimen contains antibodies that prevent viral infection of cells. Figure 6b: positive rates of viral RNA and antibodies (IgG or IgM) in 238 COVID-19 patients at different disease stages; antibody positive rates were low in the first five days after symptom onset and then rose rapidly as disease progressed (adapted with permission from Liu et al., 2020a). Figure 6c: similarity of coronavirus S and N proteins; protein domains from different coronaviruses are compared with SARS-CoV-2 (top row, 100% concordance); S1 and S2 are subunits of S, and S1 shows the least similarity. Figure 6d: evaluation of the S1 ELISA, with SARS-CoV-2 S1 protein as the capture agent; serum samples from healthy donors and from patients with non-CoV respiratory, HCoV, MERS-CoV, SARS-CoV, or SARS-CoV-2 infections were analyzed; the S1 ELISA showed no cross-reactivity with non-SARS serum samples; the dotted horizontal line indicates the ELISA cutoff, and sample numbers appear inside shaded rectangles; OD, optical density.]
Like other beta-coronaviruses, entry by SARS-CoV-2 involves its trimeric spike glycoprotein, a type 1 fusion machine that undergoes large-scale conformational changes between prefusion and postfusion conformations to facilitate fusion of viral and host cell membranes. The exact entry process for SARS-CoV-2 is still being defined, but is known to involve spike [...] with spike trimer at a 6:1 molar ratio at pH 7.4, and collected single-particle cryo-EM data on a Titan Krios. We obtained structures at 3.6-3.9 Å resolution and observed spike to bind ACE2 at stoichiometries of 1:1, 1:2, and 1:3, with prevalences of 16%, 44%, and 40%, respectively (Figures 1A and S1; Table S1). ACE2 binding introduced trimer asymmetry. First, comparison of single-RBD-up conformations of the spike, for ACE2-bound and ligand-free structures, revealed that recognition of ACE2 induces a small (~2 Å) movement of RBD (Figure S2). Second, while the membrane-proximal region of the spike in these complexes remained 3-fold symmetric, the ACE2-binding regions showed asymmetry: for example, superposition of the double-ACE2-bound complex onto itself based on membrane-proximal regions led to displacement of ACE2 molecules by almost 11 Å (Figure 1B). The full complement of trimer superpositions (Figure S3A) revealed preferential ways to align trimers moving from single- to double- to triple-ACE2-bound conformations. Analysis of domain movements indicated that the large movement of RBD (from down to up conformation) required to accommodate ACE2 binding is accompanied by more subtle movements of neighboring domains (Figure 2A), and we delineated the coordinated interprotomer domain movements involved in raising RBD (Figures 2B, C and S3). However, the largest movement in S2 between single- and triple-ACE2-bound spikes occurred at the flexible C-terminus of S2, with an overall rmsd for the S2 subunit of <1 Å between single-, double-, and triple-ACE2-bound trimers (Figure S3A, B). Thus, ACE2-receptor engagement required RBD to be in the 'up' position, but we could see no clear evidence of ACE2 binding priming S2 for substantial structural rearrangement, beyond the raising of RBD and a reduction of RBD interactions with S2. [...] particles had all RBDs down. For all three of these prevalent classes, unlike the consensus structure, density for all RBD domains was well resolved (Figure S4C, panel C), indicating multiple different orientations of RBD in the spike at pH 5.5. In the remaining classes, the RBD did not assume a defined position, suggesting RBD mobility at pH 5.5. To determine how even lower pH impacted spike conformational heterogeneity, we sought to obtain a cryo-EM structure of the ligand-free spike at even lower pH. We collected cryo-EM datasets at both pH 4.5 and 4.0. Single-particle analysis of the pH 4.5 dataset, comprising 603,476 particles, resolved into an all-RBD-down conformation, and we refined this map to 2.5 Å resolution (Figures 3B and S4, Table S3); single-particle analysis of the pH 4.0 dataset, comprising 911,839 particles, resolved into a virtually identical all-RBD-down conformation (root-mean-square deviation (rmsd) between the two structures of 0.9 Å) (Figures 3C and S4, Table S3).
The similarity of the pH 4.5 and pH 4.0 structures indicated that spike conformational heterogeneity is reduced between pH 5.5 and 4.5 and then remains unchanged as pH is reduced further. The pH 4.0 map was especially well-defined at 2.4-Å resolution (Table S3). [...]
Refolding at spike domain interfaces underlies conformational rearrangement
To identify the critical components responsible for the reduction of conformational heterogeneity between pH 5.5 and lower pH, and to shed light on the spike mechanism controlling the positioning of RBDs, we analyzed rmsds between the pH 5.5 structures and the all-down pH 4.0 conformation with an 11-residue sliding window to identify regions that refolded (Figures 4A, top, and S5). The switch domain, which includes aspartic acid residues at 830, 839, 843 and 848 and a disulfide linkage between Cys840 and Cys851, is located at the nexus of SD1 and SD2 from one protomer, and HR1 (in the S2 subunit) and NTD from the neighboring protomer. [...] Unprotonated switches were exemplified by switches B and C at pH 5.5, and perhaps best by switch B in the pH 5.5 single-RBD-up structure (Figure 5A, C; Figure 5C, left). Notably, all four of the unprotonated-switch aspartic acids faced solvent and appeared to be negatively charged. [...] (Figure S6B). With CR3022 IgG, apparent affinities to spike and RBD were sub-nanomolar at serological pH, though with a 10-fold difference (0.49 and 0.052 nM to spike and RBD, respectively) (Figure 7C). At pH 5.5, this 10-fold difference was retained (1.7 and 0.23 nM, respectively). However, at pH 4.5, CR3022 still bound to RBD (1.1 nM), but its apparent affinity to spike was dramatically reduced, with a KD >1000 nM, an apparent affinity difference we estimated to be >1000-fold (Figures 7C and S6C). Because CR3022 still bound strongly to the isolated RBD, we attributed the dramatically reduced apparent affinity of CR3022 for spike at low pH to conformational constraints of the spike (Figure 7D). Overall, the pH-induced retraction of RBDs through the spike adopting an all-down conformation can be described as a "conformational masking" energy barrier, which [...] SARS-CoV-2 spike evasion from CR3022 neutralization depends on the reduced affinity [...]. Recent studies (Benton and colleagues, Ke and colleagues, and Turoňová and colleagues, all 2020) provide additional contexts by which to interpret the structural results described here. Benton and colleagues suggest that three ACE2 destabilize the prefusion spike, but in the context of ACE2 bound to '2P'-stabilized spikes, no substantial changes in S2 conformation were induced by ACE2 binding. Meanwhile, the fascinating motions described by Ke and colleagues and by Turoňová and colleagues involve regions of spike below the ordered regions of S2 described here. Lastly, smFRET analysis suggests an on-path intermediate as the basis for the observed ACE2-induced trimer asymmetry; it will be fascinating to see whether smFRET analysis of soluble trimers at endosomal pH can provide insight into the pH-induced alterations in spike conformation we observe here. We note that the critical switch region (residues 824-858) displays remarkable structural diversity within coronaviruses, segregating into three structural clusters (Figure S7). Each of the structures within these clusters generally comprises two helices, linked by a disulfide, in distinct [...] (Punjani et al., 2017).
We note that some classes of unbound spike were also observed in both datasets; however, particle picking was optimized for complexes, so this fraction was low. The 3D reconstructions were performed using C1 symmetry for all complexes, as the ACE2-RBD region showed flexibility that prohibited typical symmetry operations in the triple-bound complexes. However, the RBD-ACE2 region was assessed in greater detail through focused refinement. [Figure residue: the switch-region sequence A829-F855 (A829 D830 A831 G832 F833 I834 K835 Q836 Y837 G838 D839 C840 L841 G842 D843 I844 A845 A846 R847 D848 L849 I850 C851 A852 Q853 K854 F855), annotated for the late endosome-early lysosome, pH 5.5-4.5.]
In the late endosome-early lysosome, the all-RBD-down conformation of the spike induces shedding of antibodies.
The AAST patient assessment committee has recently published "Organ injury scaling 2018 update: spleen, liver and kidney." We appreciate the committee's continual efforts to refine and improve the grading system, and we are excited to see a number of anticipated changes to the renal trauma grading for the first time in nearly three decades, addressing a variety of challenges that had become apparent over the years.
Initially outlined in 1989, the Organ Injury Scaling was primarily based on the anatomic findings encountered mostly at the time of open exploration of the injured organ; however, with advances in CT technology and its widespread use, incorporating key radiologic findings into the grading system was logical. Incorporating vascular injury is also a vital addition to the grading system. A parenchymal laceration depth of 1 cm is used to separate grade 2 and grade 3 injuries; however, the rationale behind selecting this cutoff is unclear, and it may be arbitrary. Laceration depth offers very little information for predicting the need for intervention once further information, such as hematoma size and vascular contrast extravasation, is available. Another ambiguity is the use of segmental vein or artery injury within the classification; it is unclear whether segmental injury is a description of the vascular anatomy, i.e., one of the five segmental renal arteries supplying the kidneys.
Taken together, these updates to the renal injury grading reflect current evidence in the management of renal trauma. Clinicians will need to understand the implications of these changes and learn whether the updated grading system improves the prediction of outcomes. Predictive tools such as nomograms and more objective criteria ought to be used in clinical follow-up to select patients who would benefit from intervention in trauma management after renal trauma.
The author declares no funding or conflicts of interest. In late December 2019, several local health facilities reported clusters of patients with pneumonia of unknown cause that were epidemiologically linked to a seafood and wet animal wholesale market in Wuhan, Hubei Province, China. 1 Deep sequencing analysis of lower respiratory tract samples indicated a novel coronavirus, and the illness was named coronavirus disease 2019 (COVID-19). 2 In severe cases, patients with COVID-19 develop a form of acute respiratory distress syndrome (ARDS), sepsis, and multiorgan failure. Moreover, older age and comorbidities are associated with higher mortality. 3 The fibrinolytic system is often suppressed during ARDS, where fibrin accumulation can promote hyaline membrane formation and alveolar fibrosis. Depressed pulmonary fibrinolysis is largely due to increased levels of plasminogen activator inhibitor 1 in both plasma and bronchoalveolar lavage fluid. 4 Furthermore, endothelial damage that disrupts pulmonary regulation promotes ventilation-perfusion mismatch (the primary cause of initial hypoxemia) and thrombogenesis. 5 Fibrin deposition is the result of an imbalance between the coagulation and fibrinolytic pathways, and several therapeutic strategies have been explored to target the dysfunction of these systems in ARDS. 6 In particular, the use of fibrinolytic therapy (including plasminogen activators) to limit ARDS progression and reduce ARDS-induced death has received strong support from animal models. 7 Human studies are limited, although in a phase 1 clinical trial, Hardaway et al. 8 showed that the administration of urokinase or streptokinase resulted in a significant improvement of the PaO2 level in patients with severe ARDS secondary to trauma or sepsis. In that study, patients had a PaO2 of less than 60 mm Hg, which increased to 231.5 mm Hg following thrombolytic therapy, with an overall 30% survival rate and no incidence of bleeding. 6,8 Previous data on fibrinolytic therapy in ARDS, together with the prothrombotic state and the clinical findings of pulmonary vascular thrombo-occlusive disease in COVID-19, suggest that tissue plasminogen activator (tPA) may have a role in the treatment of severe COVID-19-induced ARDS when all medical efforts and treatment options have been exhausted. 9 The rationale for fibrinolytic therapy lies in the pathologic fibrin deposition that reflects a dysfunctional clotting system, with enhanced clot formation and suppressed fibrinolysis, related to tissue factor produced by alveolar epithelial cells and macrophages and to high levels of plasminogen activator inhibitor 1 produced by endothelial cells or activated platelets. 10 In COVID-19 pneumonia, thrombi play a direct and significant role in gas exchange abnormalities and in multisystem organ dysfunction. The preserved lung compliance noted early in the course of COVID-19 patients with bilateral airspace opacities suggests that the observed pulmonary infiltrates could represent areas of pulmonary infarct and hemorrhage.
Therefore, thrombolysis could improve alveolar ventilation by restoring blood flow to previously occluded regions. This redistribution would reduce blood flow to vasodilated vessels, decreasing the shunt fraction and improving oxygenation. 11 Currently, only case reports and case series show efficacy of tPA in COVID-19 patients with severe ARDS, demonstrating improvement of the PaO2/FiO2 ratio with no bleeding complications. In a recent case series of five COVID-19 patients with severe hypoxemia (PaO2 < 80 mm Hg) and D-dimer greater than 1.5 μg/mL, all subjects received a protocol of 25 mg of intravenous tPA given as a bolus over 2 hours, followed by a 25-mg continuous infusion over the next 22 hours. Each patient was placed on a weight-based continuous heparin infusion after thrombolytic therapy. The administration of tPA was followed by an improvement in oxygen requirements in all patients, and three of them avoided mechanical ventilation after the tPA infusion. 12 In another report of three cases with COVID-19 and severe ARDS, Wang et al. 13 observed a transient improvement in the PaO2/FiO2 ratio in two cases and a sustained 50% improvement in one case following administration of a 25-mg bolus of intravenous tPA followed by a further 25-mg infusion over 22 hours.
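For reference, the PaO2/FiO2 ratio reported in these series is a simple quotient, conventionally graded against the Berlin definition cutoffs; the values below are invented for illustration.

```python
def pf_ratio(pao2_mm_hg, fio2_fraction):
    """PaO2/FiO2 ratio; Berlin definition grades ARDS as
    mild (200-300), moderate (100-200), or severe (<100)."""
    return pao2_mm_hg / fio2_fraction

# e.g., PaO2 70 mm Hg on 80% oxygen before tPA, 120 mm Hg on 60% after
print(pf_ratio(70, 0.80))   # 87.5  -> severe range
print(pf_ratio(120, 0.60))  # 200.0 -> moderate range
```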
Presently, there is an ongoing open-label phase 2 clinical trial 14 assessing the use of intravenous fibrinolytic therapy with tPA versus standard of care for patients infected with COVID-19 who develop severe respiratory failure. Patients will be randomized into three groups. The control group will receive standard of care according to the institution's protocol for ARDS. In the first experimental group, patients will receive 50 mg of intravenous tPA over 2 hours, given as a 10-mg push followed by the remaining 40 mg over the rest of the 2-hour period. After the tPA infusion, unfractionated heparin will be delivered intravenously, with the heparin drip continued to maintain the activated partial thromboplastin time at 60 to 80 seconds. In the second experimental group, patients will receive 100 mg of intravenous tPA over 2 hours, given as a 10-mg push followed by the remaining 90 mg over the rest of the 2-hour period. Unfractionated heparin will be delivered in the same way as in the other experimental group. The primary outcome will be the improvement in PaO2/FiO2 ratio from pre- to post-intervention in all groups. The results of the phase 2a clinical trial conducted by Moore et al. 15 will be useful for developing a phase 3 clinical trial to assess the role of fibrinolytic therapy in COVID-19-induced ARDS and will help the medical community face this challenge.
Because of the lack of effective therapies for refractory COVID-19 hypoxemic respiratory failure, where health care providers have used all available treatment options and where extracorporeal membrane oxygenation (ECMO) is not available (mainly in low- and middle-income countries), fibrinolytic therapy might be an option in a desperate clinical setting. However, even with a pathophysiologic rationale, the lack of clinical studies leaves clinicians uncomfortable with using fibrinolytic therapy to improve the PaO2 level in COVID-19 patients with refractory ARDS. Therefore, clinical trials of fibrinolytic therapy in COVID-19 patients with severe and refractory ARDS are urgently needed to resolve this question.
Foodborne disease kills people. Reminders of this chastening fact include the devastating outbreak of Escherichia coli O104:H4 in Germany in 2011, in which 54 people died and 22% of the 3186 cases of E. coli O104 developed the severe complication, hemolytic uremic syndrome (HUS). Outbreaks are striking events, yet the percentage of all cases of foodborne disease that occur as part of outbreaks is fairly small. The first port of call when seeking to understand the prevalence of foodborne disease is the official bodies that have responsibility for monitoring illness in the population. However, the datasets amassed by these organizations tend to underestimate the population burden of illness so, in the past decade or so, there has been a proliferation of new methods across Europe in an attempt to overcome this deficiency and develop better estimates of the true population burden of illness.
The major sources of collated data on the prevalence of foodborne disease in Europe are the World Health Organization (WHO) Regional Office for Europe and the European Centre for Disease Prevention and Control (ECDC). The WHO program covers the 53 countries that make up the WHO European Region, from the Republic of Ireland in the west to the Russian Federation in the east, and Israel to the south. The ECDC compiles data from the 27 Member States of the European Union (EU) and three European Economic Area/European Free Trade Association countries. In addition, the European Food Safety Authority (EFSA) is responsible for analyzing data on zoonoses, antimicrobial resistance, and foodborne outbreaks submitted by EU Member States in preparing the EU Summary Report. This is undertaken in collaboration with the ECDC.
The View from the WHO Regional Office in Europe
In 2002, the World Health Assembly mandated the WHO to develop a global strategy for reducing the burden of foodborne disease. This strategy recognizes that preventing foodborne disease and responding to food safety challenges require a timely, holistic, risk-based approach.
Information about the prevalence of foodborne disease in the WHO European Region is available from the Centralized Information System for Infectious Diseases (CISID). The CISID dataset, compiled from national reports, is underpinned by accurate and up-to-date population data for the WHO European Region and information collected by WHO is useful for risk assessment.
The CISID dataset covers a wide range of foodborne pathogens. In the WHO European Region, salmonellosis and campylobacteriosis are the most commonly reported foodborne diseases. Between 2000 and 2010 the highest incidence of campylobacteriosis was consistently reported from the Former Yugoslav Republic of Macedonia, where rates were almost three times as high as elsewhere in the region (Figure 1) .
The incidence of salmonellosis declined over the decade 2000-10 in several countries, although Salmonella remained a frequent cause of foodborne outbreaks (Figure 2).
The trend in listeriosis remained relatively stable over the decade 2000-10 (Figure 3), while reporting of enterohemorrhagic E. coli (EHEC) came mainly from Western European countries (Figure 4).
Brucellosis remained a significant public health problem in the Central Asian republics. Trichinellosis in the Balkan countries and echinococcosis in both the Central Asian republics and the Balkan countries were causes for concern. Botulism remained relatively frequent in Eastern Europe and is mainly related to traditional ways of preserving foods at home.
Established in 2005, the ECDC is an EU agency that aims to strengthen Europe's defenses against infectious diseases. The Programme on Food- and Waterborne Diseases and Zoonoses was instituted in 2006 and covers a comprehensive range of pathogens. Early priorities included consolidating surveillance for salmonellosis, campylobacteriosis, verocytotoxin-producing E. coli/Shiga toxin-producing E. coli (STEC) infection, listeriosis, shigellosis, and yersiniosis; publication of an annual zoonosis report jointly with the EFSA; and developing novel methodology to estimate population exposure to salmonellosis and campylobacteriosis using seroepidemiology. In its 2011 annual epidemiological report, the ECDC reported that Campylobacter rates are high and that it remains the most commonly reported gastrointestinal illness across the EU. In 2009, there were over 201 000 confirmed cases (rate = 53 cases per 100 000 population). The highest rates were reported from the Czech Republic (193 cases per 100 000) and the UK (106 cases per 100 000).
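The notification rates quoted throughout this section are crude rates per 100,000 population. A one-line check of the arithmetic, using an illustrative denominator back-calculated from the 2009 Campylobacter figures rather than an official EU population count:

```python
def rate_per_100k(cases, population):
    """Crude notification rate per 100,000 population."""
    return cases / population * 100_000

# ~201,000 confirmed EU cases in 2009 over an assumed reporting
# population of roughly 380 million (illustrative denominator)
print(round(rate_per_100k(201_000, 380_000_000), 1))  # ~52.9 per 100,000
```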
By contrast, the incidence of Salmonella infection has decreased progressively since 2004, which has been linked, at least partly, to effective programs to control infection in the poultry industry. There were over 111 000 cases reported in 2009 (rate = 23 cases per 100 000 population), and Salmonella continues to be a common source of outbreaks. Salmonella Enteritidis and Salmonella Typhimurium remain the most frequently identified serotypes, but rates of S. Enteritidis infection were 24% lower in 2009 than in 2008, and rates of S. Typhimurium had also declined, by 12%. Even in higher-incidence countries like the Czech Republic, Slovakia, Hungary, and Lithuania, rates have decreased markedly.
Reported cases of EHEC increased significantly between 2005 and 2009, to just over 3600 cases (rate = 0.86 per 100 000 population). More than half of the cases were due to STEC O157. There were 242 confirmed STEC cases that developed HUS, a 66% increase in HUS cases compared with 2008. This large increase was, in part, explained by two sizable outbreaks of STEC: one in the UK linked to an open farm, and a nationwide outbreak in the Netherlands associated with the consumption of STEC-contaminated beef steak tartare.
The ECDC report shows that some gastrointestinal infections that are rare or uncommon EU-wide can, nevertheless, affect particular subregions and countries. Brucellosis is reported mainly from Portugal, Spain, and Greece, where it is associated with goat farming. The majority of trichinellosis cases occurred in Bulgaria, Romania, and Lithuania, where it is likely to be associated with eating domestically reared pork and wild boar, and most confirmed echinococcosis cases were from Bulgaria, where increasing proportions of Echinococcus-positive cattle and sheep are also being reported. Overall, yersiniosis rates were decreasing but remain high in the Nordic states, Germany, the Czech Republic, and Slovakia, where infection is often associated with pork consumption. Confirmed case rates for listeriosis were highest in 2009 in Denmark (rate = 1.8 cases per 100 000 population), which has experienced a marked increase in cases, especially in people over 70 years of age. The increase was attributed, at least in part, to a surge in consumption of ready-to-eat (RTE) products in Denmark, especially among older people. A similar pattern was seen in the UK, where a doubling since 2001 in cases of listeriosis in people aged over 60 years was attributed to a combination of greater consumption of RTE products like sandwiches and an increase in underlying medical conditions, like cancer, requiring complex immunosuppressive treatment.
The View from the EFSA

The EFSA, in collaboration with the ECDC, produces an annual 'European Union Summary Report on Trends and Sources of Zoonoses, Zoonotic Agents and Foodborne Outbreaks.' In 2010, there were 5262 foodborne outbreaks reported in the EU, similar to the level reported in 2009. These outbreaks involved nearly 43 500 people, of whom approximately 4600 were hospitalized; there were 25 deaths. The evidence implicating a food vehicle was considered to be strong in 698 outbreaks.
Salmonella was the most frequently reported pathogen (30.5% of all outbreaks), followed by viruses (15.0%) and Campylobacter (8.9%) (Figure 5). However, there was a considerable proportion of outbreaks in which the causative organism was unknown and a large percentage of Campylobacter outbreaks in which the evidence implicating a food vehicle was considered to be weak.
The most frequently reported food vehicles were eggs and egg products (22.1%); mixed or buffet meals (13.9%); vegetables, juices, and vegetable/juice products (8.7%); and crustaceans, shellfish, molluscs, and shellfish products (8.5%). An increase in the numbers of reported outbreaks caused by vegetables and vegetable products was attributed mainly to contamination with viruses.
It is becoming increasingly apparent that norovirus (NoV) is an important foodborne pathogen. Contamination can occur either at primary production (for example, shellfish, salad vegetables, and soft fruits contaminated at source) or at retail (for example, through poor practices by infected food handlers). Funded initially through a research grant, 'NoroNet' now comprises a voluntary collaboration between virologists and epidemiologists from 13 European countries who have been sharing surveillance and research data on enteric virus infections, mainly NoV, since 1999. There are international partners from North America, Australia, China, India, and New Zealand. The objectives of NoroNet include providing better estimates of the proportion of NoV infections that are foodborne and determining the high-risk foods associated with illness. Several publicly available analytical tools have been developed, including a 'transmission mode tool' to increase the chances of identifying a foodborne source of infection. Using this tool to analyze 1254 outbreaks from nine countries reported between 2002 and 2006 showed that the proportion of NoV outbreaks likely to be foodborne was 22%. NoroNet was also instrumental in identifying the latest pandemic strain, the Sydney 2012 virus, which has recently overtaken all others to become the dominant strain worldwide.
A major drawback of all surveillance systems, be they local, national, or international, is that they underestimate the true population burden of acute gastroenteritis and, therefore, the true burden of foodborne disease. Surveillance systems eavesdrop on the clinical process. The greatest potential loss of information about illness in the population occurs when people do not access the healthcare system. In most countries, an episode of illness has no chance of being included in surveillance statistics unless the patient consults a doctor or nurse. Similarly, information on pathogens is only available if the doctor or nurse requests, and the patient provides, a sample for laboratory testing. National surveillance systems for foodborne disease in Europe operate in different ways. Some are sentinel, symptom-based systems that collect little information on etiology. Others are based on laboratory-confirmed cases of infection. Laboratory testing protocols vary between laboratories within the same country, let alone between laboratories in different countries. Some systems cover the total population, while others include only a proportion of it. Most routine programs are passive, voluntary systems. The degree of underascertainment in many of the systems has not been measured, and all these factors conspire to make meaningful comparisons of disease rates between countries very difficult.
The key to determining the real impact of foodborne disease on the population is to understand, first of all, the 'true' population burden of acute gastroenteritis.
There are several methodological approaches for estimating the incidence of acute gastroenteritis including retrospective cross-sectional surveys (telephone surveys of self-reported illness, door-to-door or postal questionnaire surveys) or prospective, population-based cohort studies (Table 1) .
Seven retrospective telephone surveys of self-reported illness have recently been completed in the WHO European Region (Table 1). These have the advantage that large samples of the population can be contacted, and interviews are relatively short, so participation rates tend to be good. The major disadvantage of telephone surveys, and of other surveys seeking information on symptoms, is that the etiology of symptoms is not captured. They are also prone to inaccurate recall, especially if the recall period is long.
Rates of self-reported illness in the general population ranged between 1.4 cases per person per year in Denmark and 0.33 cases per person per year in France. Comparing rates across nations can be difficult: differences in case definitions, study designs, symptom recall periods, and the populations studied can all hamper incidence rate comparisons. For example, one of the studies highlighted in Table 1 involved only children. Nevertheless, using a standardized, symptom-based case definition enabled better comparison of rates between countries, and as the use of this case definition becomes more widespread, some of these difficulties in interpreting rates between studies should diminish.
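To make the arithmetic concrete, the sketch below shows how a rate per person per year is derived from a retrospective survey with a fixed recall window. The respondent counts and recall period are hypothetical and not taken from any of the studies in Table 1; it is a minimal illustration of the scaling step only.

```python
# A minimal sketch (with invented numbers) of deriving an incidence rate
# per person-year from a retrospective survey with a fixed recall window.

def incidence_per_person_year(cases_in_recall: int,
                              respondents: int,
                              recall_days: int) -> float:
    """Scale episodes observed in the recall window up to a full year."""
    person_years = respondents * recall_days / 365.25
    return cases_in_recall / person_years

# Hypothetical example: 120 of 3000 respondents report an episode of
# gastroenteritis within a 28-day recall window.
rate = incidence_per_person_year(cases_in_recall=120,
                                 respondents=3000,
                                 recall_days=28)
print(f"{rate:.2f} episodes per person per year")  # ~0.52
```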
As well as determining disease rates, information on healthcare usage in this series of coordinated, cross-sectional telephone surveys of self-reported illness was used to estimate under-reporting and underdiagnosis in the national surveillance systems of the participating countries. Overall, under-reporting and underdiagnosis were estimated to be lowest for Germany and Sweden, followed by Denmark, the Netherlands, the UK, Italy, and Poland. Across all countries, the incidence rate was highest for Campylobacter spp. and Salmonella spp. Adjusting incidence estimates for the biases inherent in different surveillance systems provides a better basis for international comparisons than relying on reported data.
Prospective studies are uncommon, perhaps because of their expense. Three such studies have been conducted in Europe: one in the Netherlands and two in the UK. The major advantage of cohort studies is the ability to obtain samples from patients with infectious intestinal disease (IID) to confirm etiology, which is important if one of the aims is to calibrate national surveillance systems. A major drawback is that participation rates can be low and losses to follow-up may be high, but there are several strategies to try to overcome both of these important limitations.
In the UK, illness burden has been estimated in a population-based prospective cohort study and a prospective study of presentations to primary care (the Second Study of Infectious Intestinal Disease in the Community (IID2 Study)). Up to 17 million people (approximately one in four) in the community were found to suffer from IID in a year (annual incidence = 0.27 cases of IID per person per year). There were approximately 3 million cases of NoV infection and 500 000 cases of campylobacteriosis. The estimated time taken off work or school because of IID was nearly 19 million days. Approximately 1 million people presented to their primary healthcare team, and the leading causes were NoV infection (130 000 cases) and campylobacteriosis (80 000 cases).
As well as defining illness burden, a secondary objective of the IID2 Study was to recalibrate national surveillance systems, i.e., to estimate the number by which laboratory reports of specified pathogens need to be multiplied to establish the actual number of infections in the community. So, for every case of IID reported to national surveillance centers in the UK, 147 cases had occurred in the community. For Campylobacter the ratio of disease in the community to reports to national surveillance was approximately 9 to 1, for Salmonella the ratio was approximately 5 to 1, and for NoV the ratio was almost 300 to 1.
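As a worked illustration of how such multipliers recalibrate surveillance counts, the sketch below scales hypothetical numbers of laboratory reports up to community-level estimates. Only the ratios come from the IID2 figures quoted above; the reported counts are invented.

```python
# A minimal sketch of applying community-to-report multipliers of the kind
# estimated in the IID2 Study. The multipliers below are those quoted in
# the text (Campylobacter ~9:1, Salmonella ~5:1, NoV ~300:1); the reported
# counts are hypothetical.

multipliers = {"Campylobacter": 9, "Salmonella": 5, "Norovirus": 300}
reported = {"Campylobacter": 60000, "Salmonella": 9000, "Norovirus": 12000}

for pathogen, n_reported in reported.items():
    estimated_community = n_reported * multipliers[pathogen]
    print(f"{pathogen}: {n_reported:,} laboratory reports -> "
          f"~{estimated_community:,} estimated community cases")
```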
A very powerful way to win the interest of politicians and policy makers is to be able to attach a monetary value to food-related illness. In developed countries, diarrheal disease can be belittled as inconvenient and unimportant alongside noncommunicable diseases like diabetes, heart disease, and stroke. Nevertheless, there can be considerable disruption to society and the economy. For example, the estimated costs of diarrheal disease are in the region of 345 million EUR in the Netherlands, 270 million EUR in Australia, and 2.8 billion EUR in Canada.
In the Netherlands, in 2009, the burden of NoV infection alone was estimated at 1622 (95% confidence interval (CI) 966-2650) disability-adjusted life years (DALYs) in a population of 16.5 million, a substantial burden for what is generally held to be a mild, self-limiting illness.
Having worked out the burden of acute gastroenteritis, the next rational step is to apportion illness burden by transmission route, namely foodborne transmission. Once again, several methodologic approaches are available, including epidemiologic and microbiologic approaches, intervention studies, expert elicitation, health economics assessments, and systematic reviews.
Outbreaks that have been meticulously investigated, i.e., where the evidence linking the outbreak to a food vehicle is strong, can provide useful information for subdividing diarrheal disease by transmission route. However, there are several limitations when interpreting the results. The first is the robustness of the evidence incriminating a food vehicle in the first place: for example, in the EFSA report published in 2010, only 698 of 5262 outbreaks were considered to provide strong evidence of a link to a food vehicle. Second, one has to assume that the distribution of food vehicles implicated in outbreaks mirrors the distribution of food vehicles responsible for sporadic cases of infection, and this is a major assumption.
In the UK, estimates of the disease risks associated with eating different foods suggest that over 1.7 million cases of UK-acquired foodborne disease per year result in almost 22 000 hospital admissions and nearly 700 deaths. Campylobacter infection caused the greatest impact on the healthcare sector (nearly 161 000 primary care visits and 16 000 hospital admissions), although Salmonella infection resulted in the most deaths (more than 200).
In France, it has been estimated that foodborne pathogens cause between 10 000 and 20 000 hospital admissions per year. Salmonella is the most frequent cause of hospital admissions, followed by Campylobacter and Listeria.
The UK's Food Standards Agency estimates the cost of foodborne illness in England and Wales annually by assessing the resource and welfare losses attributable to foodborne pathogens. The overall estimated cost of foodborne illness annually in England and Wales has remained relatively constant since 2005 at approximately GBP 1.5 billion. For comparison, in New Zealand and the USA the costs are 216 million NZD and 152 billion USD, respectively.
In the Netherlands, the foodborne disease burden due to 14 food-related pathogens has been estimated using DALYs. This method for determining disease burden includes estimates of duration and takes into account disability weights for nonfatal cases and loss of statistical life expectancy for fatal cases. In total, there were an estimated 1.8 million cases of diarrheal disease and 233 deaths, of which approximately 680 000 cases and 78 deaths were allocated to foodborne transmission. The total burden was 13 500 DALYs. At a population level, Toxoplasma gondii, thermophilic Campylobacter spp., rotaviruses, NoVs, and Salmonella spp. accounted for the highest disease burden.
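For readers unfamiliar with the metric, the standard DALY formulation combines years lived with disability (YLD) for nonfatal cases and years of life lost (YLL) for fatal cases; this is the generic definition, not the exact specification of the Dutch model:

\[
\mathrm{DALY} = \mathrm{YLD} + \mathrm{YLL}, \qquad
\mathrm{YLD} = \sum_{i} n_i \, w_i \, d_i, \qquad
\mathrm{YLL} = \sum_{j} m_j \, e_j,
\]

where \(n_i\) is the number of nonfatal cases with health outcome \(i\), \(w_i\) the disability weight for that outcome (between 0 and 1), \(d_i\) its mean duration in years, \(m_j\) the number of deaths at age \(j\), and \(e_j\) the residual standard life expectancy at that age.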
Similarly, the public health effects of illness caused by foodborne pathogens in Greece during 1996-2006 have been calculated. Approximately 370 000 illnesses per million people were judged to have occurred because of eating contaminated food; of these, 900 were severe and three were fatal. The corresponding burden was estimated at 896 DALYs per million population. Brucellosis, echinococcosis, salmonellosis, and toxoplasmosis were the most common known causes of foodborne disease and accounted for 70% of this estimate.
Expert elicitation employs expert opinion to apportion pathogens between foodborne transmission and transmission via other routes. An example is the Delphi method, which usually involves experts answering questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts' forecasts from the previous round, together with the reasons given for their judgments. The experts can then modify their earlier answers in the light of the replies of other panel members. The range of answers tends to decrease with each round, so that the panel converges toward a 'correct' answer. The Delphi technique is predicated on the premise that forecasts (or decisions) from a structured panel of people are more accurate than those from unstructured groups. Panels do not need to meet in person for the method to work.
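The aggregation step between rounds is simple; a minimal sketch, with invented estimates of the foodborne fraction for a single pathogen, might look like this:

```python
# A minimal sketch of summarizing one Delphi round: experts' estimates of
# the foodborne fraction for a pathogen are reduced to a median and range
# to feed back to the panel. All estimates are invented for illustration.

import statistics

round_2_estimates = [0.40, 0.45, 0.50, 0.55, 0.42, 0.48, 0.52]

median = statistics.median(round_2_estimates)
low, high = min(round_2_estimates), max(round_2_estimates)
print(f"Foodborne fraction: median {median:.2f} (range {low:.2f}-{high:.2f})")
```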
Using structured expert elicitation, almost half of the total burden of diarrheal disease in the Netherlands was attributed to food. Toxoplasma gondii and Campylobacter spp. were identified as key targets for additional intervention efforts, focusing on food and environmental pathways. Not surprisingly, perhaps, a very high proportion of toxin-producing bacteria (Bacillus cereus, Clostridium perfringens, and Staphylococcus aureus) were considered to be predominantly foodborne. By contrast, multiple transmission routes were assigned to the zoonotic bacterial pathogens and protozoan parasite T. gondii although the food pathway was considered to be the most important.
An alternative way to assess the incidence of foodborne pathogens is to investigate exposure to them. Pioneered in Denmark and the Netherlands, an approach to studying infection pressure has been developed using serum antibodies to Campylobacter and Salmonella as biomarkers to estimate seroconversion rates. This shows that infections are much more common than clinical disease, probably because the majority of infections are asymptomatic. A great advantage of this method is that the assessment of incidence is independent of surveillance artifacts. The method confirms that comparing reported incidence between countries can lead to a totally false impression, even within the EU.
To pinpoint and then prioritize food safety interventions, the burden of food-related illness needs to be allocated to food commodities. Again, several methodologies exist.
The most persuasive evidence for the role of contaminated food items probably comes from studies that demonstrate the impact of interventions on human disease burden. For example, in the UK, where two population-based prospective cohort studies have taken place 15 years apart, there has been a marked fall in nontyphoidal salmonellosis in the community. The fall in incidence coincides closely with voluntary vaccination programs in broiler-breeder and laying flocks, and suggests that these programs have made a major contribution in improving public health, demonstrating the success of such concerted, industry-led action.
Natural experiments also illustrate the importance of poultry contamination as a major source of human Campylobacter infection. For example, in the Netherlands widespread culling of poultry that took place because of an avian influenza outbreak was followed by a decrease in Campylobacter infection in people, particularly in the areas where culling had taken place. Similarly, when contamination with dioxins caused poultry to be pulled from the supermarket shelves in Belgium the incidence of laboratory-confirmed Campylobacter infection in people fell.
The main applications of source or reservoir attribution using microbial subtyping have been to Salmonella and Listeria. Sero- and phage-typing data tend to be used for this purpose. The underlying philosophy is that controlling pathogens in the source or reservoir will avert subsequent human exposure, whatever the transmission route or vehicle. Comparing results from animal and human surveillance programs provides insights into the major sources of disease in people.
In Denmark, a source attribution model has been developed to quantify the contribution of the major animal-food sources to human salmonellosis. This showed that domestic food products accounted for over half of all cases, with over one-third of cases attributable to table eggs. Nearly a fifth of cases were travel related, and in a similar proportion no source could be pinpointed. Nearly 10% of cases were attributed to imported food products, the most important source being imported chicken. Multidrug- and quinolone-resistant infections were rare among Danish-acquired infections and were caused more frequently by imported food products and travel abroad.
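The Danish model itself is a Bayesian model with source-specific adjustment factors; the deliberately simplified, deterministic sketch below illustrates only the core bookkeeping of microbial subtyping, namely distributing the human cases of each subtype across sources in proportion to that subtype's prevalence in each source. All subtype names, prevalences and case counts are invented for illustration.

```python
# A simplified, deterministic sketch of subtyping-based source attribution.
# Human cases of each subtype are split across food-animal sources in
# proportion to the subtype's (hypothetical) prevalence in each source.

human_cases = {"S. Enteritidis PT4": 500, "S. Typhimurium DT104": 200}

source_prevalence = {
    "S. Enteritidis PT4": {"table eggs": 0.08, "broilers": 0.02, "pork": 0.00},
    "S. Typhimurium DT104": {"table eggs": 0.01, "broilers": 0.02, "pork": 0.05},
}

attribution: dict[str, float] = {}
for subtype, cases in human_cases.items():
    prev = source_prevalence[subtype]
    total = sum(prev.values())
    for source, p in prev.items():
        share = cases * p / total if total else 0.0
        attribution[source] = attribution.get(source, 0.0) + share

for source, n in sorted(attribution.items(), key=lambda kv: -kv[1]):
    print(f"{source}: ~{n:.0f} attributed cases")
```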
Information from well-conducted outbreak investigations can be very useful for so-called point-of-consumption attribution, as it is gathered at the public health endpoint and can therefore be considered a direct measure of attribution at the point of exposure. One of the difficulties in using outbreak data, however, is that foods implicated in reported outbreaks are often complex foods, containing several ingredients or food items, any one of which might be the specific source of the pathogen. The method works best for pathogens for which outbreaks are relatively common: it is more robust for STEC and Salmonella than for Campylobacter, because Campylobacter outbreaks are rarely recognized. Using EU outbreak data, 58% of Salmonella cases that could be allocated to a source were attributed to contaminated eggs, and 29% of Campylobacter cases that could be allocated to a source were attributed to contaminated poultry. However, for both pathogens the majority of cases could not be attributed to a source, illustrating another limitation of using outbreak data for these purposes.
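A subtle point in such percentages is that the denominator includes only the cases that could be linked to a source. The minimal sketch below, with invented counts, makes that explicit:

```python
# A minimal sketch of point-of-consumption attribution from outbreak data,
# showing that attributable fractions are computed only among cases that
# could be linked to a food vehicle. All counts are hypothetical.

outbreak_cases = {"eggs": 2900, "poultry": 600, "pork": 500, "unknown": 6000}

attributable = {k: v for k, v in outbreak_cases.items() if k != "unknown"}
total_attributable = sum(attributable.values())

for food, n in attributable.items():
    print(f"{food}: {100 * n / total_attributable:.0f}% of attributable cases")

print(f"unattributed: {outbreak_cases['unknown']} cases excluded "
      f"from the denominator")
```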
In the UK, using outbreak data for point-of-consumption attribution showed that the most important cause of UK-acquired foodborne disease was contaminated chicken and that red meat (beef, lamb, and pork) contributed heavily to deaths. The prioritization exercise that this type of analysis allowed showed that reducing the impact of UK-acquired foodborne disease depended mainly on preventing contamination of chicken.
Several case-control studies of sporadic salmonellosis and sporadic campylobacteriosis have been published, often using different methodologies and conducted in different settings. Systematic reviews consist of a formal process of literature review focused on a specific research question. In a systematic review and meta-analysis of 35 case-control studies of sporadic salmonellosis, traveling abroad, underlying medical conditions, eating raw eggs, and eating in restaurants were the most important risk factors. Similarly, in a systematic review and meta-analysis of 38 case-control studies of sporadic campylobacteriosis, foreign travel, undercooked chicken, consumption of environmental sources, and direct contact with farm animals were all significant risk factors.
In a systematic review and meta-analysis of hepatitis E virus, occupational exposure to swine was found to be a more important route of transmission to humans than eating contaminated pork. This is an important finding requiring further exploration before any public health policy action in relation to food is implemented.
Most surveillance systems that capture information on etiology elicit information on known pathogens. Yet in the IID2 Study in the UK, a known pathogen was assigned to only 40% of community cases and 50% of the cases presenting to primary care; in the remainder of the stool samples submitted, no pathogen was detected. So what about the rest? There are a number of possible reasons for the high percentage of cases with unknown etiology. First, it is possible that people reported transient changes in bowel habit not caused by IID. Second, these cases might have been caused by organisms not included in the diagnostic algorithms, like enteropathogenic or enterotoxigenic E. coli. Third, the cases might have been caused by putative pathogens like enterotoxigenic Bacteroides fragilis or Laribacter hongkongensis. Several coronaviruses, picobirnaviruses, pestiviruses, and toroviruses have also recently been proposed as causes of IID, particularly in children.
Whole-genome sequencing techniques, though not yet in widespread use, create enormous prospects for identifying novel pathogens that might be transmitted through the food chain.
Foodborne illness in Europe is an important public health problem no matter what method is used to measure its impact. If success in public health is defined by illness prevented, there is still a long way to go in controlling foodborne disease in Europe.
See also: Foodborne Diseases: Overview of Biological Hazards and Foodborne Diseases; Overview of Chemical, Physical, and Other Significant Hazards
One of the 'positives' to emerge from the COVID-19 pandemic in the UK has been a dramatic increase in the availability and use of remote consultations (4). Driven initially by a need to protect and safeguard patients and healthcare professionals, early experience has shown that many routine reviews and some acute consultations can be successfully managed remotely.
Telemedicine or telehealth is becoming the new norm and can be used as an alternative to face-to-face consultations, eliminating the risk of infection (Figure 1) (5). Any available technology, such as phone, text messaging, email, and video, has been employed to provide care, and using many different means of communication makes telemedicine available to a greater number of patients. Even so, inequalities in accessing healthcare persist, because the availability of these means of communication differs between communities. Early experience in the UK has made clear that there are great differences between families in access to high-quality internet connectivity, which has limited the use of some approved, data-secure platforms such as Attend Anywhere. Health disparities, and gaps in access to and quality of care, are still present. Solutions for the NHS have not been cheap; in the longer term, however, they may prove cost-effective and eliminate some expenses for families, such as travel and parking. Some aspects of routine care for children have been hampered. The significant reduction in the availability of lung function testing for children with chronic respiratory diseases is a concern. In some instances, this has been partially overcome by providing home testing with either peak flow meters or portable spirometers, which allow more nuanced care and advice to be given (6).
Engaging the public in planning and decision making, together with educating parents and children efficiently, has proven useful when implementing public health strategies (7). Some strategies appear to have an evidence base. For instance, it has been suggested that social distancing might be better adhered to if public health officials portray it as an act of altruism, giving children a sense of duty to protect their loved ones (8). The direct approach may also be helpful and has certainly been tried: for example, the Canadian Prime Minister specifically thanked children for their efforts, an act intended to foster a sense of social duty among the young (9).
The universal use of face masks, and the inclusion of younger children within any guidance, are still being debated. When it comes to children there are more issues to consider, including the availability of masks in different sizes that fit the face well, as well as the risk of suffocation in children younger than two years (10). Additionally, it is more challenging to persuade younger children not to take the mask off. Innovative ideas have started to emerge, with Disney designing fabric face masks featuring children's favourite characters to help children accept wearing them (11).
Whilst initial data do not suggest that children with comorbidities are at particularly increased risk of severe COVID-19 disease (12-14), the challenge remains of maintaining good continuity of care for existing patients and adequate diagnostic care for children presenting for the first time.
Children with chronic diseases and their carers have been particularly anxious about the impact that COVID-19 could have on them. This can be partially resolved for many by maintaining communication with these families, providing reassurance, advising on hygiene measures, and educating on COVID-19. Where children fall outside clear guidelines, tailored and individualised plans offer reassurance for all of those involved with the provision of care.
At the start of this pandemic in the UK, the advice given to families of children with many chronic diseases was to shield the whole household to prevent severe illness. In hindsight, some of this advice was unduly cautious but, faced with uncertainty, public health authorities, paediatricians and primary care physicians erred on the side of caution. For some families, the increased anxiety may have longer-term consequences. The act of shielding can have severe impacts on a child's physical and mental health. For example, going back to school can be beneficial for children with cerebral palsy or musculoskeletal problems, as school provides developmental support and gives access to therapies.
As ongoing research suggests low disease severity among the young, and given the negative effects of shielding, experts questioned whether shielding children with many comorbidities was ever justified. This led to revision of the strict guidelines. With social distancing rules slowly being lifted and schools re-opening, the RCPCH has provided new guidance on shielding (15).
One of the medical sectors highly affected by the pandemic is the Emergency Department (ED) (16). Normally, emergency services in the UK are unnecessarily overused, leading to overcrowding and stretched resources, particularly during evenings and weekends. However, regional data suggest a decrease of more than 30% in children presenting to the paediatric ED by March 2020, and this decline in activity was maintained into the summer. This has certainly helped to prevent services from being deluged and allowed time for new processes and health protection procedures to be put in place. Whilst this change in behaviour could discourage unnecessary attendance at the ED, it could also put at greater risk children with serious pathologies that require treatment.
In the UK, safeguarding has always been an important concern (17). During a global pandemic, when the focus is on the direct results of the disease, vulnerable children experiencing maltreatment and neglect at home are sidelined. Long home confinement, together with frustration, agitation, and aggression, creates opportunities for harm to children. Moreover, the loss of the safety net provided by schools, social care and health professionals decreases the number of abuse cases reported. Without anyone to spot narratives or signs of abuse, home becomes a very dangerous place for vulnerable children.
Unfortunately, increasing incidences of domestic violence and rising numbers of calls to child support lines have been reported (18). Children and teenagers exposed to violence, whether as witnesses or victims, can experience detrimental effects on their physical and mental health.
Incomplete immunisation has always been a worrying issue and, during a pandemic, it can easily be neglected. This could expose communities to the risk of an outbreak of a vaccine-preventable disease. For this reason, the WHO has declared immunisation a core health service that should be safeguarded and conducted under safe conditions, and has prepared documents that explain the reasoning behind this and respond to questions the public or health authorities might have (19, 20).
In recent years, weight gain during periods of school closure, especially during summer vacations, has been a worrying issue for the paediatric community (21). When comparing behaviours during summer and the school season, accelerated weight gain is observed during the summer holidays (21). A school closure during a pandemic is not equivalent to a summer vacation; nonetheless, there are distinct similarities, such as the lack of structure during the day, the increase in screen time and a change in sleeping routines (Figure 2). In fact, a small longitudinal observational study conducted in Verona, Italy during this pandemic showed that the unfavourable lifestyle trends discussed above were observed among obese children and adolescents (22).
The COVID-19 pandemic has further risk factors that might exacerbate the epidemic of childhood obesity (23). Firstly, as out-of-school time has extended well beyond a regular summer, the period during which children are exposed to obesogenic behaviours has increased. Secondly, parents are stocking shelves with highly processed, calorie-dense food, justified by the need to maintain food availability and minimise trips outside; this, however, exposes children to higher-calorie diets. Thirdly, social distancing and stay-at-home policies reduce opportunities for physical activity: school physical activities were removed, playgrounds could not be kept clean, parks closed their gates, and community centres offering after-school programmes shut their doors. Children living in urban areas, confined within small apartments, are at greater risk of adopting a sedentary lifestyle. Lastly, there has been increasing use of video games, a sedentary activity that leads to excessive screen use (24).
These obesogenic behaviours need to be taken seriously and tackled, as they could have profound consequences that are not easily reversible. Moreover, we should bear in mind that adult obesity and its comorbidities are associated with COVID-19 mortality (25), which raises the question of whether overweight or obese children will suffer more severe repercussions upon contracting COVID-19.
Therefore, there is a need to maintain a structured daily routine for children that includes playtime and exercise, limits on calorie intake, a regular sleep pattern and supervision of screen time (Figure 3).
Although emphasis is given to the obesogenic effects of the pandemic, there is also the issue of malnourishment, as many students rely on school meals. In fact, school meals and snacks can represent up to two-thirds of children's nutritional needs in the USA (26). In addition to not receiving appropriate nutrition through school, children may be exposed to cheaper, unhealthy food choices. In the UK, a partial safety net was established and maintained over the summer period following a successful campaign by the English footballer Marcus Rashford (26). This will offer some support to children who already received free school meals; however, not all children in the UK, and many others worldwide, will be protected. Insecurity over food availability causes long-term psychological and emotional harm to children and parents. A collaboration among WHO, UNICEF and IFRC has provided comprehensive guidance to help protect schools and children, with advice for the event of school closure and for schools that remain open (28). Important points in this guidance are the emphasis on a holistic approach towards children, tackling the negative impacts on both learning and wellbeing, and on education about COVID-19 and its prevention. Even so, these guidelines can only be considered checklists and tips for each government to use accordingly.
China provided a successful example of an emergency home-schooling plan (29), with a virtual semester delivered in a well-organised manner through the internet and TV broadcasts, yielding satisfactory results. However, digital learning is an imperfect system that brings to the surface the inequalities caused by poverty and deprivation. Many children have had limited, shared or no online access, as a result of a lack of equipment (laptops or tablets) or of internet access. Some parents struggled to provide adequate supervision. This was significantly more challenging for parents of children with additional educational needs (e.g. ADHD or learning difficulties), for those with several children of different ages, and for children with additional carer responsibilities. It was also much more difficult for parents who were expected to work from home at the same time. The full effects of a temporary 'pause' in many children's education in the UK remain to be seen, but they are likely to widen the gap between children from more deprived backgrounds and their peers.
Since the beginning of the 21st century, there have been several major disease outbreaks, including severe acute respiratory syndrome (SARS) in 2003, the H1N1 influenza pandemic in 2009 and the Ebola outbreak in 2013. However, mental health research was largely overlooked during these events, and the absence of mental health services during previous pandemics increased the risk of psychological distress for those affected (30).
There are variable psychological manifestations as a result of a pandemic. Early childhood trauma can affect a child in many ways (31): it can increase the risk of developing a mental illness and can also delay developmental progress. Moreover, early childhood trauma in the form of adverse childhood experiences (ACEs) can have profound effects that manifest in later life, such as an increase in substance abuse and problems with relationships or education, as well as an increased risk of chronic diseases such as asthma, obesity and attention deficit hyperactivity disorder (32). About 30% of children isolated or quarantined during the H1N1 pandemic in the United States of America met the criteria for post-traumatic stress disorder (PTSD) (33). This study noted the lack of professional psychological support for these children during or after the pandemic. Of the much smaller percentage of children who did receive input from mental health services, the most common diagnoses were anxiety disorder and adjustment disorder. Moreover, the same study showed that one quarter of parents also fulfilled the criteria for PTSD, showing that parental anxiety and poor mental health can be reflected in other members of the family, including the children.
Events and conditions can affect our physical and mental health, acting as stressors or triggers that can predispose anyone, including children, to adverse physical or psychological responses.
The stressors that could impact a child during a pandemic are shown in Figure 5. The duration of the lockdown appears to be particularly important: researchers have shown that the longer the quarantine, the higher the chances of mental health issues emerging in adults. It is unknown whether the same applies to children (8).
Some children and young people (CYP) will be more vulnerable to the adverse consequences of any stressor, and these pre-pandemic predictors should also be considered. In a child, such predictors could include age and a history of mental illness. There is a complex link between mental health and social background: families with lower incomes face tough choices about how to cope with the day-to-day challenge of providing basic necessities (e.g. food, clothing and heating) and may be less able to give priority to the mental health of their children (or themselves).
When considering what can influence mental health, the link between physical and mental health should also be considered. The physical health of a child can be affected either directly or indirectly by COVID-19. This raises the question of how physical illness could affect psychological wellbeing too, and whether this could lead to a vicious cycle.
The sudden advance of telemedicine in the UK has been one of the unexpected changes brought about by COVID-19. It has helped maintain some health care for physical ailments, ranging from acute illness to the review of children with chronic conditions like asthma, diabetes and cystic fibrosis. It can also be used for psychological counselling for parents or children, helping children learn to cope with mental health problems via professional help within the security of their own house (34). Telemedicine for mental health is already established in some countries: psychological crisis intervention, a multidisciplinary team programme developed as a collaboration among several Chinese hospitals, uses the internet to provide support (30).
Telemedicine alone is most probably not sufficient to manage and provide for the mental health needs of the high numbers affected. When already limited access to trained professionals is hit by a global pandemic, the shortage of professionals and paraprofessionals becomes noticeable, and the needs of the high number of patients are not met (35).
When dealing with mental health, a hierarchical and stepwise approach that starts in the community is helpful (Figure 6). Integration of behavioural health disorder screening tools within the public health response to a pandemic is crucial to cope with demand. These tools should recognise the importance of identifying specific stressors and build on the epidemiological picture of each individual to identify those CYP at greatest risk of psychological distress (36).
Following the correct identification of patients in the community, the next step involves an appropriate referral. Interventions can vary according to the individual's presentation and can include psychoeducation or prevention education. Any behavioural and psychological intervention should be based on a comprehensive assessment of that patient's risk factors (35). For instance, researchers have shown that specific patient populations, such as the elderly and immigrant workers, may require a tailored intervention (37, 38). This finding could plausibly be extended to children and teenagers, as their needs differ from those of adults.
The rapid changes that both parents and children are forced to face and the uncertainty of an unpredictable future have been compared to the loss of normalcy and security that palliative care patients are faced with and paediatric palliative care teams may be well-placed to provide psychological support to families (39) .
Community organisations have a significant role to play in addressing mental health problems. They empower communities and provide tailored support, and when this support comes from organisations that understand the community, the community's beliefs are embodied within the programmes offered. It is therefore clear that the best results will only be achieved if different bodies work together. This underlines how crucial it is to maintain good communication between community health services and primary and secondary care institutions, to ensure patients receive a timely diagnosis and better follow-up (35). Conversely, poor communication could delay meeting patients' needs.
The internet is a potentially useful tool for the provision of mental health support. There are many reliable resources online that anyone could use effectively without being in direct contact with professionals. Large organisations such as UNICEF have provided online documents to help teenagers protect their mental health during the pandemic, and many books about the current pandemic and its psychological impact are being released electronically, free to the public (34). Likewise, online self-help interventions, such as cognitive behavioural therapy (CBT) for depression, can be used by anyone experiencing such symptoms. This type of intervention, and the signposting of lower-risk CYP and families to safe, well-constructed resources, is highly efficient, allowing mental health professionals to focus more intensive interventions on higher-risk individuals.
The online world is more easily accessible and much more appealing to older children and teenagers. As young people become the experts of this virtual world, it is only logical to use social media for our benefit. Successful mental health campaigns in the past have used hashtags on social media like Instagram and Twitter to increase awareness of mental health problems (40). With more bloggers and social media influencers talking about mental health, together with the use of hashtags, the societal benefits become apparent: a strong feeling of empowerment is built that helps combat the stigma of mental illness, and there is a therapeutic benefit through the provision of information on how to find professional support or self-management strategies. Even though online gaming can have negative impacts on young people's physical health, during a period of home confinement it provides a means for friends to stay in contact. Both online gaming and the already extensive use of social media have the potential to bring people closer together and give a feeling of solidarity.
For young children, using online resources on their own might not be easy, although they too are becoming surprisingly expert users of the web. Nonetheless, many resources have been designed to explain the pandemic to children, alleviate their anxiety and promote good hygiene. A good example is the collaboration of Sesame Street with Headspace, a mindfulness and meditation company, to create YouTube videos that help young children tackle stress and anxiety. Parents, carers, or older siblings can all help young children access these resources.
The power of technology and the virtual world is becoming a turning point for societies today. Interestingly, an artificial intelligence program has been created to identify people with suicidal ideation by scanning their posts on specific social media platforms; once identified, volunteers take appropriate steps to help these individuals. During a time of extensive home confinement, when use of the web becomes even more prominent, such programs might again provide a service to society by identifying teenagers struggling with their mental health (41).
Healing through creative expression is a popular tool amongst child therapists and has proven useful in previous pandemics too (31). During the Ebola outbreak, a dynamic art programme was established for Liberian children. It focused on how therapeutic expressive arts could teach coping skills and build healthy relationships in a safe and supportive space for children to express themselves and experience healing; these interventions showed positive results early on.
The advantage of art programmes like this is that, once they have been built by mental health professionals and child health specialists, they can be delivered in communities by paraprofessionals who receive appropriate training. This allows the projects to be implemented over a wider area so that more children benefit.
Parents should provide a core pillar of support for children, with school and teachers, the rest of the family and friends providing robust supporting pillars. With home confinement, these supporting pillars break down and the parent becomes the only resource for a child to seek help from. When the wellbeing of the child is at risk, it is important for parents to monitor the children's behaviours and performance.
Open communication is necessary to identify any issues, physical or psychological. Having direct conversations about the pandemic can prove useful in mitigating children's anxiety. A common notion is for parents to shield their children from bad news to protect them. It is true that most of the information about the pandemic that children are exposed to is not directed at them and can hence become overwhelming. However, children will still ask important questions and expect satisfactory answers; shielding them from the world is not the right answer.
Parents should practise active listening, respond appropriately to any questions the child might have, and adapt their responses to the child's reactions. Narrating a story or encouraging the child to draw what is on their mind might enable the start of a discussion. Attention also needs to be given to any difficulties sleeping and to the presence of nightmares, as these could be signs that the child is not coping well. Although monitoring of behaviours is required, it is a delicate issue and should not put the child in an uncomfortable position. Even with daily home confinement, it is important to respect children's privacy and identity.
Whilst it may seem overwhelming for families to provide children with all the information required, there are already many reliable online resources on how best to maintain the health and wellbeing of children through this pandemic. UNICEF has provided online resources for parents, with emphasis on how to talk to a child about the pandemic and provide comfort. Similarly, the WHO has provided a series of posters on parenting during the pandemic, again with the purpose of promoting the wellbeing of children.
Parents and carers are often portrayed as superheroes. However, even these superheroes may experience anxiety and fear during a pandemic. The psychological health of parents and their children seems inextricably linked, and PTSD is more common in children whose parents are experiencing it too (33).
As parents with poorer mental health might not be able to respond effectively to the needs of their children, addressing this is important. Alleviating their stressors will help improve mental health. Priority should be given to ensuring that the basic needs of these families are met, including food provision, financial support, and healthcare access. Once these are provided, psychological support has a better chance of having a positive effect on parents. Such support can have multiple dimensions, and various tools can be utilised; these should be easily accessible, quick to put into practice, and aimed at strengthening mental resilience. For instance, behavioural practitioners have suggested the use of acceptance and commitment therapy (42).
Anecdotally, the consequence of a prolonged period of lockdown for some families in the UK has been positive. Home confinement can benefit interactions and help children engage in family activities. This may strengthen family bonds and meet the psychological needs of a developing mind. Collaborative games fight loneliness and strengthen family relations; other activities families can do together include learning new skills like cooking or taking up new hobbies like building puzzles. Researchers have also shown that during the brief school closures of the A/H1N1 influenza pandemic in 2010, as time went by, parents became more prepared and started planning more activities. This gave young people more reasons to stay at home and helped encourage social distancing, which had proved harder at the start of that pandemic (43).
The impact of the COVID-19 pandemic goes beyond the risk of severe acute respiratory illness: it has had severe social and economic consequences worldwide. Children and young people have been exposed to severe repercussions which, if not addressed, could have even worse outcomes in the future. Therefore, governments, communities, non-governmental organisations and healthcare professionals need to work in collaboration to prevent irreversible damage to a generation.
The ongoing COVID-19 pandemic remains an important healthcare challenge. 1 Data on the course of inflammatory rheumatic diseases during the pandemic are scarce. 2 Partial or complete closure of rheumatology services was experienced in many countries as part of virus containment measures and transient lockdown of public life. 3 It remains unclear whether remote consultation strategies might partly compensate for lower numbers of face-to-face visits to prevent a postponement of treatment decisions. 4 Additional factors may also contribute to disease worsening during the pandemic: some patients may choose to preventively stop immunosuppression out of fear of complications, 2 and the psychological stress (anxiety about a new disease, economic pressure, fewer recreational opportunities and so on) encountered during the pandemic should not be underestimated. 5 The aim of this study was to assess the course of self-reported disease activity and of drug adherence in patients with axial spondyloarthritis (axSpA), rheumatoid arthritis (RA) and psoriatic arthritis (PsA) before, during and after the initial COVID-19 wave in Switzerland.
What is already known about this subject?
► Partial or complete closure of rheumatology services was experienced in many countries as part of SARS-CoV-2 containment measures.
► Remote consultation strategies might partly compensate for lower numbers of face-to-face visits to prevent a postponement of treatment decisions for inflammatory rheumatic diseases.

What does this study add?
► In this real-life cohort study of patients with axial spondyloarthritis, rheumatoid arthritis and psoriatic arthritis, with patient-reported disease activity assessments available via a web-based application during the first wave of the COVID-19 pandemic, no increase in disease activity was observed after a drop in face-to-face consultations.
► Although the proportion of patients with medication non-compliance slightly increased during the pandemic, the proportion of patients with disease flares remained stable.
► The patient population followed here used a smartphone app regularly and might be more invested in disease management. The results have to be interpreted in this light.

How might this impact on clinical practice or future developments?
► The lack of a major detrimental effect of a short interruption of physical consultations on the disease course of several inflammatory rheumatic diseases informs potential future measures of public lockdown.
► As patient-reported outcomes are insufficient to guide treat-to-target efforts, assessments of long-term outcomes are warranted.
► Future studies are needed to confirm the usefulness of remote strategies to regularly assess patient-reported outcomes.
The specific COVID-19 situation in Switzerland in the first 6 months of 2020 is detailed in the supplementary appendix. According to the described longitudinal course of SARS-CoV-2 infection numbers, we defined three study periods of 2 months' duration each: (1) a pre-COVID-19 wave phase from 1 January to 29 February 2020; (2) a COVID-19 wave phase from 1 March to 30 April 2020 and (3) a post-COVID-19 wave phase from 1 May to 30 June 2020 (figure 1A).
Patients diagnosed as having axSpA, RA or PsA in the Swiss Clinical Quality Management (SCQM) cohort 6-8 were included if at least one patient-reported disease activity measure was available in each of the study periods defined above, irrespective of whether the assessment was performed during consultations or remotely via a web-based application. All patients currently followed in SCQM, defined as patients with at least one visit during the last 18 months, served as controls. Voluntary use of the app by patients to monitor disease activity and drug compliance monthly started on 1 January 2019. 9 Additional information provided to patients during the pandemic is compiled in the supplementary appendix.
All patients gave informed consent prior to data collection. Ethical approval was given by the Geneva cantonal committee for research ethics (2020-01708).
Patient-reported disease activity assessments included the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) in axSpA, 10 the Rheumatoid Arthritis Disease Activity Index-5 (RADAI-5) in RA 11 and the Patient Global Assessment (PGA) visual analogue scale for disease activity in PsA, 12 both during visits and for app entries. Disease activity measures were investigated for each 2-month period as previously defined. A clinically important worsening in individual patients from period 1 to 2 and from period 2 to 3 was defined as an increase of 2 points in BASDAI for axSpA, of 1.4 points in RADAI-5 for RA 11 and of 1.2 points in PGA for PsA. 12
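As a concrete restatement of these cut-offs, a minimal sketch of the flare classification might look as follows; whether the threshold must be reached or exceeded is not specified in the text, so the inclusive comparison below is an assumption.

```python
# A minimal sketch of the flare definition: a clinically important
# worsening between two study periods, using the disease-specific
# thresholds quoted in the text. The ">=" comparison is an assumption.

FLARE_THRESHOLDS = {"axSpA": 2.0, "RA": 1.4, "PsA": 1.2}  # BASDAI, RADAI-5, PGA

def is_flare(diagnosis: str, score_before: float, score_during: float) -> bool:
    """True if the score rose by at least the disease-specific cut-off."""
    return (score_during - score_before) >= FLARE_THRESHOLDS[diagnosis]

print(is_flare("axSpA", 3.4, 5.6))  # True: BASDAI rose by 2.2 points
print(is_flare("RA", 2.8, 3.9))     # False: RADAI-5 rose by only 1.1 points
```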
All other answers except 'yes' to the question 'Do you take the following medication regularly?' in the monthly app questionnaire were considered as non-compliance with prescribed medication (online supplemental information).
McNemar's test was used to compare the proportions of patients with drug non-compliance or experiencing a disease flare and the paired t-test was used to compare disease activity scores between two subsequent periods.
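A minimal sketch of these paired analyses, using standard routines from statsmodels and scipy with invented data, is shown below; the study's actual analysis code is not described in the text.

```python
# A minimal sketch of the paired analyses: McNemar's test for paired
# proportions and the paired t-test for disease activity scores between
# two subsequent periods. All data below are simulated for illustration.

import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Paired flare status (False/True) in period 1 vs period 2 per patient:
flare_p1 = rng.random(200) < 0.10
flare_p2 = rng.random(200) < 0.12

# 2x2 table of paired outcomes: rows = period 1, columns = period 2.
table = np.array([
    [np.sum(~flare_p1 & ~flare_p2), np.sum(~flare_p1 & flare_p2)],
    [np.sum(flare_p1 & ~flare_p2),  np.sum(flare_p1 & flare_p2)],
])
print(mcnemar(table, exact=True).pvalue)

# Paired t-test on disease activity scores in two subsequent periods:
basdai_p1 = rng.normal(3.4, 2.2, 200)
basdai_p2 = basdai_p1 + rng.normal(-0.17, 1.0, 200)
print(ttest_rel(basdai_p1, basdai_p2).pvalue)
```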
The monthly number of patients consulting rheumatologists and the monthly number of patients with app entries for 2019 and the three periods of interest in 2020 are depicted in figure 1B. The number of visits declined by 52% with the implementation of virus containment arrangements, from n=543 in February to n=262 in April 2020. Given the measures taken to motivate patients to enter disease activity in the app, and their willingness to contribute to shared decision making and research, this decline was paralleled by an increase in app entries (from 521 to 1195).
Baseline characteristics of the 287 axSpA, 248 RA and 131 PsA patients fulfilling the inclusion criteria are shown in table 1. The patients in the individual disease categories were comparable with the respective groups of all patients currently followed in SCQM, with the exception of the RA subset, which was younger and had a slightly lower disease activity score at inclusion (online supplemental table S1). The low number of face-to-face visits precluded a comparison between patients with clinical visits and those with remote data entries. The majority of patients (>70%) were treated with a biological disease-modifying antirheumatic drug (bDMARD) at the study start, with the proportion of patients on synthetic DMARDs depending on the underlying disease (table 1). The pre-pandemic proportion of patients with non-compliance to the prescribed medication was around 15%. There was a slight increase in the number of non-adherent patients during the pandemic, the difference from pre-pandemic numbers reaching statistical significance in axSpA (table 1). Adherence returned to pre-pandemic levels in the post-COVID-19 phase.
Patient-reported disease activity outcomes were stable over the first 6 months of 2020 (figure 2 and table 1), with a slight decrease during the pandemic wave, reaching statistical significance in axSpA (mean (SD) BASDAI 3.40 (2.23) before the pandemic and 3.23 (2.25) during the pandemic, p=0.02). To put the disease activity scores in a broader perspective, monthly median values from all SCQM patients are shown separately for physical consultations and remote app entries from January 2019 to June 2020 in online supplemental figures S1 and S2. The proportion of patients with a disease flare during the pandemic wave was <15% for all three diseases (table 1), and the difference from the proportion with disease worsening in the pre-COVID-19 phase was not statistically significant.
A web-based smartphone application had been implemented within the Swiss registry long before the current pandemic and allowed us to follow the course of inflammatory arthritides over the whole initial COVID-19 wave. We noted an acute drop in clinical encounters that was paralleled by an increase in app entries. Our study demonstrates that disease activity as assessed by the BASDAI in axSpA, the RADAI-5 in RA and the PGA in PsA remained stable and even slightly decreased over the duration of the pandemic wave at the population level. Moreover, a disease flare occurred in <15% of patients, not statistically different from the pre-COVID-19 phase. Although cut-offs for a clinically important worsening exist for the patient-reported outcomes used here for RA and PsA, there is no consensus for a BASDAI cut-off in this regard. We have used a worsening by two points, as its performance was comparable with the defined Ankylosing Spondylitis Disease Activity Score cut-off against the external standard 'patient-worsening'.13 Patient-reported worsening was investigated in a recent observational study in patients with RA and patients with axSpA and was experienced by 29% of patients over a duration of 3 months.14 The results presented here can only be interpreted in the context of the rather short first COVID-19 pandemic wave as encountered in Switzerland. A recent international survey in 35 EULAR (European League Against Rheumatism) countries found that a partial closure of rheumatology services of 5-8 weeks' duration during the COVID-19 pandemic was reported by 81% of 1428 respondents,3 underscoring the representativeness of our data. Current guidelines based on preliminary data do not recommend the preventive cessation of immunosuppressive medication in the absence of infection.15 16 To continue or to stop medication in individual situations during the COVID-19 pandemic is ultimately part of a shared decision-making process between the patient and their rheumatologist. We have therefore focused on patient-reported non-adherence to the medication entered in the database by the rheumatologist and not on actual drug changes. We hypothesise that the duration of the pandemic was too short for the documented transient decrease in drug adherence to be reflected in an increase in disease flares.
Table 1 Baseline characteristics of patients, mean disease activity scores as well as number of disease flares and of drug non-compliance cases in the respective 2 months before, during and after the COVID-19 wave in Switzerland
Regular assessment of disease activity is a key component of the treat-to-target principle in the management of rheumatic diseases. In addition to the voluntary reporting of disease activity by patients, we assume an important increase in the number of remote patient-physician interactions (email and phone calls) during the pandemic. Although the actual figures remain unknown, the influence of telemedicine on the outcomes presented here should not be underestimated.4 We acknowledge that patient-reported measures cannot replace clinical examination. Recent data suggest that their exclusive use might be insufficient to guide treat-to-target efforts.17 In the absence of alternatives in the context of suspended visits to physicians, their use is however warranted.
An important limitation of this work is that we could only evaluate patients with regular assessments of disease activity, which were mostly based on remote data entries during the pandemic. This subset using the smartphone app is probably more invested in disease management, and the non-compliance figures might therefore be underestimated.
In conclusion, a temporary interruption of in-person consultations during the COVID-19 pandemic had no major detrimental impact on the disease course of patients with inflammatory rheumatic diseases as assessed through patient-reported outcomes.
|
Introduction to eco-efficient materials for reducing cooling needs in buildings and construction Fernando Pacheco-Torgal C-TAC Research Centre, University of Minho, Guimarães, Portugal
According to Watts (2018), in February of 2017 the temperatures in the Arctic remained 20°C above average for longer than a week, increasing the melting rate. As a consequence, the replacement of ice by water led to a higher absorption of solar radiation, making the oceans warmer and being responsible for basal ice melting (Tabone et al., 2019) and also for a warmer atmosphere (Ivanov et al., 2016). This constitutes a form of positive feedback that aggravates the aforementioned problem. Wadhams (2017) already stated that an ice-free Arctic will occur in the next few years and that it will likely increase the warming caused by the CO2 produced by human activity by 50%. The latest data on rates of melting combined with new models suggest that an ice-free Arctic summer could occur by 2030 (Screen and Deser, 2019; Bendell, 2019). The warming of the earth will also result in extensive permafrost thaw in the Northern Hemisphere. With this thaw, large amounts of organic carbon will be mobilized, some of which will be converted and released into the atmosphere as greenhouse gases. This, in turn, can facilitate positive permafrost carbon feedback and thus further warming (Schuur et al., 2015; Tanski et al., 2018). Turetsky reported that permafrost thawing could release between 60 and 100 billion tonnes of carbon. This is in addition to the 200 billion tonnes of carbon expected to be released in other regions, which will thaw gradually. Also, recent fires in the Arctic region (Siberia and Alaska) have emitted millions of tons of CO2, constituting another positive feedback situation, and it is expected that in future fires will occur more frequently (Chen et al., 2016; Riley et al., 2019; Schirmeier, 2019). Vicious cycles of drought, leading to fire, leading to more drought have a positive feedback effect that further aggravates carbon dioxide emissions and global warming. Gasser et al. (2018) stated that the world is closer to exceeding the budget (the cumulative amount of anthropogenic CO2 emission compatible with a global temperature-change target) for the long-term target of the Paris Climate Agreement than previously thought. Also, according to Xu et al. (2018), three lines of evidence suggest that the rate of global warming will be faster than projected in the recent IPCC special report. First, greenhouse-gas emissions are still rising. Second, governments are cleaning up air pollution faster than the IPCC and most climate modelers have assumed, but aerosols, including sulfates, nitrates, and organic compounds, reflect sunlight, so the aforementioned cleaning could have a warming effect by as much as 0.7°C. Third, there are signs that the planet might be entering a natural warm phase, because the Pacific Ocean seems to be warming up, in accord with a slow climate cycle known as the Interdecadal Pacific Oscillation, which could last for a couple of decades. These three forces reinforce each other. Kareiva and Carranza (2018) stated that positive feedback loops represent the gravest existential risks, and the risks that society is least likely to foresee. To make things worse, the current climate emergency is also impacting microorganisms, not only exacerbating the impact of pathogens and increasing disease incidence, but also having a positive feedback effect on climate change (Cavicchioli et al., 2019). Recently Bamber et al.
(2019) found that, when thermal expansion and glacier contributions are included, future sea level rise for 2100 will exceed 2 m, which is more than twice the upper value put forward by the Intergovernmental Panel on Climate Change in the Fifth Assessment Report. This is especially worrisome, because 90% of urban areas are situated on coastlines, making the majority of the world's population increasingly vulnerable to the current climate emergency (Elmqvist et al., 2019).
At the same time, the United Nations estimates that by 2030, 700 million people will be forced to leave their homes because of drought (Padma, 2019) . Drought and heat waves associated with this climate emergency are responsible for damaging crop yields, deepening farmers' debt burdens, and inducing some to commit suicide. A study by Carleton (2017) shows that in India over the last three decades, the rising temperatures have already been responsible for over 59,000 suicides. And this raises the additional issue of determining which countries should take responsibility for climate refugees (Bayes, 2018) .
It is no wonder that Wallace-Wells (2017) wrote about catastrophic scenarios that include starvation, disease, civil conflict, and war. Even the discreet and circumspect Joachim Schellnhuber, professor of theoretical physics, expert in complex systems and nonlinearity, founding director of the Potsdam Institute for Climate Impact Research (1992-2018) and former chair of the German Advisory Council on Global Change, spoke out more strongly in his foreword for the paper by Spratt and Dunlop (2018), in which he wrote: "climate change is now reaching the end-game, where very soon humanity must choose between taking unprecedented action, or accepting that it has been left too late and bear the consequences." Torres (2019), on the other hand, has gone beyond the pessimistic forecasting by mentioning a hypothetical "double catastrophe scenario" in which an ongoing "stratospheric geoengineering project" is interrupted by a destabilizing event (e.g., a terrorist attack, or interstate or civil war) that could have unpredictable consequences for the global climate, for instance bringing about massive agricultural failures. And these are not unrealistic scenarios, because the world economy is so entangled that any random event could have massive consequences, especially for poor people. Coincidentally or not, in September 2019 several drones attacked the Abqaiq facility in Saudi Arabia, the most important oil processing facility in the world, worsening an already unstable world economy (FT, 2019). This is a clear sign of the consequences of a world economy addicted to nonrenewable resources that are located in one of the most unstable regions in the world.
In July of 2018, Professor Bendell authored a dramatic piece warning of probable social collapse, articulating the perspective that it is now too late to stop a future collapse of our societies because of the current climate emergency, and that we must now explore ways in which to reduce harm. He called for a "deep adaptation agenda" that would encompass "withdrawing from coastlines, shutting down vulnerable industrial facilities, or giving up expectations for certain types of consumption" (Bendell, 2018). The essence of this deep adaptation lies in the "four Rs":
1. Resilience: What do we most value and want to keep?
2. Relinquishment: What must we let go of?
3. Restoration: What skills and practices can we restore?
4. Reconciliation: What can we make peace with to lessen suffering?
And in December of the same year, Read (2018) also shared his views about the dramatic future of our planet and the fate of humanity. Some say Bendell has gone too far in his pessimistic views, but a professor of physics at the University of Oxford wrote the following in a paper published in August of 2019: "Let's get this on the table right away, without mincing words. With regard to the climate crisis, yes, it's time to panic" (Pierrehumbert, 2019) . In a presentation that Bendell gave in May of 2019 at the European Commission, he mentioned the importance of technologies for deep adaptation (Bendell, 2019) , but of course he only named a few. Be that as it may, he was not even considering long-term scenarios defended by Baum et al. (2019) . Also on November 5, 2019, Ripple et al. (2019) , along with several thousand scientists, issued a warning: "Clearly and unequivocally the planet Earth is facing a climate emergency," which is none other than tacit support for Bendell's views.
1.2 Heat waves, urban heat island, and cooling materials as a way to save lives in the context of the coronavirus recession
According to the IPCC, heat waves are the most important and dangerous hazard related to the current climate emergency. Kew et al. (2019) reported that anthropogenic climate change has increased the odds of heat waves at least threefold since 1950, and across the Euro-Mediterranean the likelihood of a heat wave at least as hot as summer 2017 (responsible for temperatures above 40°C in France and the Balkan region and nighttime temperatures above 30°C) is now on the order of 10%. The negative impacts of extreme heat, which are more acute in urban areas, include health risks, higher concentrations of pollutants (Meehl et al., 2018), lower water quality, and decreased labor productivity. Ironically, Belkin and Kouchaki (2017) even found that heat increases fatigue, which leads to a reduction in positive affect, subsequently reducing individual helping. Also, those most affected are the most vulnerable groups among urban dwellers: the elderly, individuals with preexisting chronic conditions, communities with weak socioeconomic status, people with mental disorders, and isolated individuals (Smid et al., 2019). In this context it is worth remembering that in 2003 a European heat wave claimed the lives of several thousand people, and in 2010 Moscow was hit by the strongest heat wave of the present era, killing more than 10,000 people. Europe, with its growing aging population, a population more susceptible to heat-wave effects, will in that context be hit harder (Fig. 1.1). If no adaptation measures are undertaken, this could mean several thousand additional deaths per year from heat waves (and their synergistic effects with air pollution). The consequences associated with heat wave predictions do not even take into account the effect of urban heat islands (UHIs). This phenomenon is triggered by the absorption of radiation by artificial urban materials, transpiration from buildings and infrastructure, the release of anthropogenic heat from inhabitants and appliances, and the airflow-blocking effect of buildings (Mirzaei and Haghighat, 2010; Pacheco-Torgal et al., 2015). Dark-colored surfaces (such as dark asphalt pavements) have low reflecting power (or low albedo characteristics); as a consequence they absorb more energy and in summer can reach almost 60°C, thus contributing toward greater UHI effects. UHI is probably the most documented phenomenon of the current climate emergency for various geographic areas of the planet, with a huge increase in the number of publications appearing on this topic since 1990 (Fig. 1.2). This may have something to do with the beginning of the sustainable development movement after the publication of the Brundtland report (Our Common Future) (Brundtland, 1987). As a result of the massive urbanization and industrialization of human civilization in the last few decades, UHI has gained a dramatic dimension that jumpstarted the publications in this field. In the future this urbanization trend is expected to become even more pronounced; according to Guerreiro et al. (2018), by 2050 urban systems will be home to 66% of the global population, with the proportion even higher in the European Union, where currently 75% of the population resides in cities, with expected growth to 82% by 2050. At that point, UHI will become more and more important, with more dramatic consequences.
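The albedo mechanism behind those hot dark surfaces can be made concrete with a toy surface energy balance. The following sketch is my illustration, not a model from this chapter: it ignores convection, conduction, and sky temperature, and the irradiance and emissivity values are assumptions, so the absolute temperatures are only indicative of the trend.

```python
# A toy radiative-equilibrium sketch (my illustration, not from this chapter):
# it ignores convection, conduction, and sky temperature, so absolute
# temperatures are only indicative; irradiance and emissivity are assumed.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
G = 1000.0         # assumed peak solar irradiance on the surface, W m^-2
EMISSIVITY = 0.95  # assumed long-wave emissivity of the pavement

def equilibrium_temp_c(albedo: float) -> float:
    """Temperature at which emitted long-wave power balances absorbed solar power."""
    absorbed = (1.0 - albedo) * G
    return (absorbed / (EMISSIVITY * SIGMA)) ** 0.25 - 273.15

for albedo in (0.05, 0.35, 0.50):  # dark asphalt vs. lighter and reflective surfaces
    print(f"albedo {albedo:.2f}: ~{equilibrium_temp_c(albedo):.0f} degC")
```

Even this crude balance shows the ordering that matters for UHI mitigation: raising albedo from about 0.05 to 0.50 cuts the absorbed power in half and drops the equilibrium surface temperature by tens of degrees.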
Some authors have reported a 10°C temperature increase in the city of Athens due to the UHI effect (Santamouris et al., 2001) and an 8.8°C increase in London (Kolokotroni and Giridharan, 2008), while a recent 3-year investigation in the city of Padua reported an increase of up to 6°C (Busato et al., 2014). According to Li et al. (2014), even the waste heat discharged by air conditioners alone was responsible for an increase of almost 2°C in Beijing's average air temperature in 2005.
Recent projections show that in the northwest area of the United Kingdom, summer mean temperatures could rise by 5°C (50% probability, 7°C top of the range) by the 2080s (Levermore et al., 2018). The expected rise in global temperature is likely to increase the energy needed to cool buildings in the summer. Balaras et al. (2007) mentioned an increase in cooling energy needs of more than 2000% between 1990 and 2010. Also, the synergistic effect between heat waves and air pollution worsens outdoor air quality in the summer and prevents natural ventilation, thus increasing cooling needs. In the heavily polluted city of Beijing, Li et al. (2014) reported that 28.88% of total air-conditioning energy consumption is due to the UHI effect. Manoli et al. (2019) studied data from some 30,000 cities worldwide and concluded that cities with a more desert-like surrounding countryside can more easily achieve cooler temperatures by use of careful plantings than cities surrounded by tropical forests, which need far more green spaces to reduce temperatures because additional vegetation also creates more humidity. In these latter areas, other cooling methods are therefore expected to be more effective, such as increased wind circulation, more use of shade, and new heat-dispersing materials. Shandas et al. (2019) used a combination of ground-based measurements and satellite data that accurately identified areas of extreme urban heat hazards. The results showed that the urban microclimate was highly variable, with differences of up to 10°C between the coolest and warmest locations at the same time. Sailor et al. (2019) reported that building occupants in many US cities rely on air conditioning to such a degree that their health and well-being are compromised in its absence. They found that residential buildings are highly vulnerable to heat disasters, a situation that will be exacerbated by the intensification of UHIs. A recent review by Santamouris (2019) included a projection that the mortality of the elderly population in Washington State will increase between 4 and 22 times by 2045, and heat-related mortality in three cities in the northeastern United States will increase six to nine times by 2080 under the high-emission scenario, RCP 8.5. Of course, these projections do not account for the economic recession caused by the coronavirus (Michelsen et al., 2020; Fernandes, 2020; Leiva-Leon et al., 2020), which in turn will reduce the number of those who can afford air conditioning. Some estimates are so pessimistic (Sraders, 2020) as to imply that air conditioning could become a luxury expense.
On the positive side, Macintyre and Heaviside (2019) concluded that cool roofs could reduce heat-related mortality associated with the UHI effect by about 25% during a heat wave. This shows how cooling materials can be important in saving lives, especially in the context of the coronavirus recession. Bai et al. (2018) advocated that research on mitigating urban climate change and adapting to it must be supported at a scale commensurate with the magnitude of the problem, and that funding agencies need to provide grants for cross-disciplinary research and comparative studies, especially in the global south. Sharma et al. (2019) also stated that there exists a huge gap in the literature on such topics. Of particular concern are cities located in developing nations with arid or semiarid climate conditions that are already experiencing very hot and dry summers and, due to low adaptive capacities, are more vulnerable to the changing climate. All of these factors constitute a strong justification for this book. In addition, books already on the market do not present a comprehensive review of the full innovative range of eco-efficient materials capable of mitigating UHI effects and meeting building cooling needs. Some publications contain almost no information on cool pavements and others are deficient on subjects such as switchable glazing-based materials (Santamouris, 2019). This book, however, has balanced coverage of eco-efficient materials for pavements, façades, and roofs, with a section especially for phase change materials (PCMs) and switchable glazing-based materials. With special contributions from a team of international experts, it provides an updated state of the art on eco-efficient materials for reducing cooling needs in buildings and other construction.
Part One encompasses an overview of pavements for mitigation of urban heat island effects (Chapters 2-4).
Chapter 2 covers particular applications aimed at mitigating heat-related concerns using high-albedo materials. First, the significance and relevance of albedo and its usefulness for effectively producing thermally optimized pavements are analyzed. After a short contextualization, the physics of albedo is discussed, in order to better understand its theoretical meaning and function in determining the radiative properties of materials that affect the general energy balance of pavement surfaces. High-albedo paving solutions are described, giving some details on constituent materials, chromatic characteristics, and surface properties. Some general issues regarding the overall benefits and drawbacks involved in the utilization of high-albedo pavement materials are debated.
Chapter 3 introduces three-component organic reversible thermochromic microcapsules, including their classification, merits, components, structure, thermochromic mechanism, and thermal and optical properties. The performance of thermochromic asphalt binders is illustrated, covering physical, optical, thermal, rheological, and antiaging properties, and the adjustment of thermochromic asphalt temperature. Finally, some proposals for future research on thermochromic asphalt are given.
Chapter 4 reviews pavements developed to mitigate urban heat islands. This includes information on methods to quantify surface temperature and heat transfer from pavement to urban temperature. Furthermore, the impacts of pavement temperature on mechanical performances are presented. Special attention is given to porous pavement, PCM pavement, and hydronic pavement.
Façade materials for reducing building cooling needs are the subject of Part Two (Chapters 5-9).
Chapter 5 presents a revised radiation apportionment model for estimating the benefits of shading against short-wave radiation as cooling-load reduction. Adhering to the principles of an improved radiative transfer model, field data harvested from net radiometers were input into a Microsoft Excel spreadsheet. The Solver function determined the radiative properties of various layers of a windowed building envelope, featuring a climber green wall.
Chapter 6 investigates the potential of different geometrical brick patterns and their self-shading behavior to reduce the mean surface temperature of solar-exposed brick walls. The study was shaped in two layers, including field measurements of geometric behavior and evaporative cooling potential. The results are discussed and compared for three configurations (solid, extruded, and perforated) under varying boundary conditions over a day.
Chapter 7 presents an innovative low-energy, low-tech, and low-cost cooling system for buildings. This cooling system simultaneously makes use of three available heat sinks: the ground, evaporation of water, and radiation to the sky. A terra-cotta tank is placed along a northern wall of the building to exploit the last two phenomena. The chapter includes a case study simulation of a 100-m² house in Bordeaux climatic conditions.
Chapter 8 refers to a retrofitting case study of an office building using hemp-based plaster and passive cooling techniques. An economic simulation is also included.
Chapter 9 concerns the evaluation of the thermal response of advanced glass façade structures with improved energy performance during the cooling period of buildings. Two types of glazed façade structures were analyzed: a 6-pane glazing structure upgraded to a BIPV system with and without PCM inserts, and a BIPV double-glazed façade with a transparent triple-glazed interior structure. The diurnal transient thermal response of the constructions under investigation was evaluated using a CFD technique for extreme daily summer climate conditions for Athens, Ljubljana, and Stockholm.
Part Three (Chapters 10-12) deals with roofing materials for reducing building cooling needs.
Chapter 10 reviews the importance of green roofs for UHI mitigation. It includes design options and green roof modeling. The chapter also includes a description of an experimental set-up with extensive green roofs, including its cooling energy savings.
Chapter 11 discusses the thermal performance of building roofs with conventional and reflective coatings. A numerical model of a building roof validated with experimental data is included.
Chapter 12 focuses on utilization of active and passive cool roof systems to enhance the comfort of building occupants with attic temperature reduction. A case study including a thermal reflective coating, MAC-solar powered fan, and a rainwater harvesting system is analyzed.
Part Four concerns PCMs and switchable glazing-based materials for reducing cooling needs (Chapters 13-18).
Chapter 13 contains a short review of recent developments concerning biobased phase change materials for cooling in buildings.
Chapter 14 deals with PCM selection (mapping), based on its thermophysical properties and climatic parameters, which are location specific, followed by technology for PCM incorporation within building components. It also provides a comprehensive review of studies carried out so far in terms of energy savings through PCM incorporation within buildings.
Chapter 15 provides a state-of-the-art review of novel PCM-based strategies for building cooling performance enhancement. The investigated strategies include PCM integrated forms (such as distributed and coupled systems) and combined strategies (such as high-reflective coating, radiative cooling wall, and hybrid ventilations). Solutions for system performance enhancement of novel PCM-based cooling systems are comprehensively presented.
Chapter 16 reviews optically smart thin materials, such as thermochromic and electrochromic coatings, including all the mechanisms that manage their optical smartness. Classical and new methods used to fabricate these materials are detailed. Applications of these materials, such as smart thermal building insulators, and their impact on cooling and heating energy consumption are discussed.
Chapter 17 focuses on the most promising innovative solutions, in particular those whose strong dynamic and adaptable behaviors are able to tailor building energy needs and to optimize their performance and indoor-outdoor functionality. In this chapter, thermochromic materials are presented together with their major potentialities and limitations. Finally, their effect on building energy efficiency is assessed, with particular focus on the existing applications at the single building and urban scales.
Chapter 18 closes Part Four with an overview of thermochromic glazing products, considering their thermo-optical properties and technological integration. This chapter also includes a quantitative evaluation of their whole building performance. Total energy use and visual comfort aspects are discussed for a typical sun-oriented office building in three different climates, in order to provide an overview of their potential performance improvements as compared to traditional static glazing technologies.
|
The incidence of sarcomas of the bone is about 8 to 9 per million population per year, and they contribute about 1% of all cancers diagnosed worldwide [1, 2]. Although a rare entity in orthopaedic practice, appropriate management of these malignancies is paramount, considering the aggressive nature of some of these pathologies. Primary bone sarcomas such as osteosarcoma, Ewing's sarcoma, and chondrosarcoma can metastasize to the lungs, bones, or bone marrow. A delay in diagnosis and initiation of appropriate treatment can be detrimental to the overall survival and prognosis of the patient.
In the current pandemic, COVID-19 cases are increasing at a tremendous rate the world over, and more than 2.3 lakh (230,000) persons have tested positive for COVID-19 in India alone, until the filing of this commentary [3]. The present health crisis is exerting tremendous pressure on the existing healthcare system of a developing nation like ours. Certain orthopaedic surgeries, in situations like trauma, severe infections, and malignant tumours that endanger the limb or life of patients, are certainly unavoidable. In orthopaedic oncology, surgeries for sarcomas likewise cannot be deferred beyond a certain period. The timing of surgery and the timely resumption of chemotherapy post-surgery form an important part of sarcoma management and provide good overall survival (OS) for such patients.
As part of limb salvage surgeries, determining the status of the surgical margins (bony as well as soft tissue) is sometimes quintessential for the operating surgeon intra-operatively. This involves transportation of fresh unfixed surgical specimens to the pathology labs, where these are subjected to frozen section examination. The process of frozen sectioning itself generates a lot of aerosols in the pathology lab and potentially creates another hazardous zone apart from the operation theatre (OT), especially when dealing with samples from a COVID-19-positive case. A useful alternative for assessing the intra-operative surgical margins is imprint cytology (IC). The slides for cytological assessment can be prepared by the surgeon in the OT and examined by the pathologist in the OT complex itself. This circumvents the requirement for transportation of potentially hazardous tissue samples outside the OT complex and their subjection to unnecessary aerosol-generating procedures like frozen sections. Many centres and hospitals still seem to be following the routine practice of obtaining frozen sections intra-operatively despite these concerns, due to lack of awareness regarding the same. Thus, better interaction between the orthopaedic surgeon and the pathologist, and sensitization to such practices, is the need of the hour in this COVID-19 crisis.
Intra-operative pathological consultation can be done by techniques such as frozen section and imprint cytology (touch imprint, crush, or scrape), each having its advantages and limitations. Both techniques are quick, provide a reliable intra-operative provisional tissue diagnosis, and give a reasonable assessment of the adequacy of surgical resection limits. This helps the surgeon make an intra-operative decision on whether to proceed with further extension of the surgical margins. The diagnostic yield, sensitivity, and specificity of both techniques for tumour detection in specimens or margins are almost comparable in several studies [4, 5].
Direct imprints are prepared by pressing the glass microslide gently against the freshly cut surface of the specimen, avoiding any gliding movement. The smear is immediately fixed in alcohol-based fixatives, and one may be left to air dry. These smears are stained with Toluidine blue, Diff-Quik stain, rapid haematoxylin and eosin (H&E), and May-Grünwald Giemsa (MGG) [6]. The technique of preparing the glass slides can be easily learnt in detail by the orthopaedic oncosurgeon from the pathologist. The marrow margin status from the cut ends of the bone, to confirm appropriate resection length, as well as the soft tissue margin status, can be obtained.
The technique of frozen section diagnosis has several disadvantages in the current COVID-19 crisis:
1. The main problem is the persistence of SARS-CoV-2 on inanimate surfaces such as cryostats; the virus can survive temperatures of −20°C, the temperature used for cutting fresh-frozen sections [7].
2. There is potential for extensive aerosol creation during the processing of the specimen. The cutting blade itself poses the highest risk of surface contamination and must be handled with extreme care.
3. Ideally, frozen sections have to be performed in a separate area away from the main laboratory workflow to minimise exposure risk to other staff.
4. Strict adherence to cryostat decontamination protocols has to be followed, which takes a long time, and many laboratories have only one cryostat machine available for frozen sections.
Besides these disadvantages, the pathologists and their technicians have to work in proper PPE gear and wear protective eye gear and N95 face masks while preparing frozen sections. Many times, fogging over the protective face shield or the eye gear further hampers this work.
Limb salvage, wherever possible, has become the modality of choice for surgical treatment of primary malignant bone sarcomas. The success of a limb salvage surgery is heavily dependent on the quality of the surgical margins [9]. The resection of high-grade sarcomas is performed applying all the principles of sarcoma management, with the quality of the surgical margins determined by the barrier concept as elucidated by Kawaguchi et al. [10]. Assessment of the extent of surgical resection (bony/soft tissue) is usually planned preoperatively on a recent good-quality contrast-enhanced MRI with fine cuts and all sequences, CT, and radiographs. At times, the use of an intra-operative navigation system, especially in resections around the pelvis, is beneficial for confirming bony resection margins. The availability of the same is a constraint at most centres.
In most situations, these tumours also have a soft tissue component, and confirming negative resection margins by way of frozen sections is a standard protocol at most centres. This is usually imperative when the resection limits are very close to the tumour or the bony osteotomies are placed extremely close to the tumour, as in intercalary resections that preserve the patient's joints (Fig. 1).
A method like imprint cytology, which is accurate, quick, and safe in the current COVID-19 scenario for judging resection margin status intra-operatively in oncosurgeries, is warranted. The role of imprint cytology in other subspecialities of oncosurgery has also been widely reported. A retrospective study of 100 breast lumpectomy specimens that underwent intra-operative imprint cytology for studying margin status concluded imprint cytology to be a reliable diagnostic tool for the evaluation of the status of lumpectomy margins in breast cancer patients. Studying the margins for any microscopic residual foci of tumour cells forms an important part of breast-conservation surgery. This study, conducted at the Department of Pathology at the University of Florida, found intra-operative imprint cytology to have a sensitivity of 97% and specificity of 99%, with a positive predictive value (PPV) of 84% and a negative predictive value (NPV) of 99% [11]. Another study on head and neck cancers, which constitute some of the most commonly encountered cancers, especially in South East Asia, concluded imprint cytology to be a reliable investigation for specific diagnoses and an adjunct to histopathology, particularly in developing countries. The overall diagnostic accuracy of imprint cytology in this study was 96.7%, with a sensitivity and specificity of 96% and 100%, respectively, while the PPV and NPV were 100% and 84%, respectively [12]. Inputs from other specialities have also highlighted the potential hazards associated with the use of frozen sections for intra-operative assessments in the present health crisis. The American Association of Ophthalmic Oncologists and Pathologists (AAOOP), in their ocular pathology recommendations during COVID-19, have also discouraged frozen section examinations and encouraged submission of fresh fixed specimens for imprint cytology [13]. Pathologists from some associations like the European Association of Urology have categorically asked surgeons to refrain from the routine practice of asking for frozen sections unless necessary. The guidelines from this association label SARS-CoV-2 as extremely contagious and lethal and advise both pathologists and laboratory workers to be extremely cautious when handling fresh urological specimens [14]. Taking insights from other subspecialities of oncosurgery, orthopaedic oncosurgery also needs to inculcate the use of imprint cytology over frozen sections for studying surgical margins, wherever feasible, in the current health crisis.
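Since the argument leans on these accuracy figures, a minimal sketch of how such metrics derive from a 2x2 confusion matrix may be useful; the counts below are invented for illustration and do not reproduce either cited study.

```python
# A minimal sketch (my illustration, with invented counts) of how quoted
# sensitivity/specificity/PPV/NPV figures derive from a 2x2 confusion matrix.
def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true positives correctly flagged
        "specificity": tn / (tn + fp),  # true negatives correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(diagnostics(tp=32, fp=6, fn=1, tn=161))
```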
Although COVID-19 testing is to be performed for suspected cases, the risk of cross-transmission in a hospital setting that also caters to the treatment of COVID-19-positive patients is a potential problem. Hospitals, their OTs, and laboratories are potential hot zones for contracting COVID-19 infection. In such a scenario, our health system must act immediately and wisely, supporting essential surgical care while protecting patients and staff and conserving valuable resources. Use of procedures like imprint cytology in the current scenario will not only conserve resources but also protect our pathology colleagues and labs from unnecessary aerosol-generating procedures like frozen sections for the intra-operative assessment of surgical margins for sarcomas.
Authors' Contribution JSV, PB, RS: conception and design of the commentary. JSV, HM: drafting the article or revising it critically for important intellectual content. All authors read and approved the final manuscript.
Conflict of Interest The authors declare that they have no competing interests.
Informed Consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal, the patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
|
On March 11, 2020, the World Health Organization (WHO) announced the pandemic status of Coronavirus disease 2019 (COVID-19). According to the Chinese government's official report to the WHO, the first case was on December 8, 2019 (TheGuardian.com, 2020). As the pandemic epicenter, China transmitted shocks to financial and non-financial firms in G7 countries (Akhtaruzzaman, Boubaker, & Sensoy, 2020) and even to brands carrying the 'Corona' name, such as Corona beer (Corbet, Hou, Hu, Lucey, & Oxley, 2020). The WHO announcement sent financial markets worldwide into tailspins, due to the predicted global economic recessions in years to come. One day after the declaration, the S&P500, FTSE-100, and Nikkei-225 plunged by about 9.51%, 10.87%, and 4.41%, respectively. In the same period, gold as a safe-haven (Baur & Lucey, 2010) also dropped, but only by about 3.53%.
Before the cryptocurrency era, a strand of literature documented the properties of safe-haven assets. For instance, Baur & Lucey (2010) state that an asset is a safe-haven if it is uncorrelated with stocks during a market crash. Therefore, gold is considered a safe-haven during an extreme stock market downturn. Sandoval & Franca (2012) also agree that assets that are uncorrelated with stocks are prospective safe-havens. This characteristic is important because, during significant financial crises such as 1987 (Black Monday), 1998 (Russian crisis), 2001 (the dot-com bubble and 9/11), and 2008 (the GFC), financial markets tend to be highly interrelated with one another.
Since its inception, the cryptocurrency market has grown tremendously. As the pioneer, Bitcoin has increased in value from nearly $0 in October 2009 to more than $7,000 in April 2020 (CoinMarketCap.com, 2020). Chan et al. (2019) state that the dramatic Bitcoin price increase in December 2017 is pivotal for determining its hedging abilities. According to Bouri, Molnár, Azzi, Roubaud, & Hagfors (2017), an asset is a weak (strong) hedge if it is uncorrelated (negatively correlated) with another asset on average. An asset is a weak (strong) safe-haven if it is uncorrelated (negatively correlated) with another asset during distress times.
Can Bitcoin be a safe-haven for stocks? Smales (2019) argues against it because of Bitcoin's high volatility, illiquidity, and transaction cost. Chaim & Laurini (2019) also point out the potential bubble in Bitcoin, albeit it is more probable for the period before December 2017 (Geuder, Kinateder, & Wagner, 2019) . During the COVID-19 market downturn, Conlon & McGee (2020) state that Bitcoin is not a safe-haven since its price moves closely with S&P500. Bitcoin is not even a diversifier but an amplifier of contagion (Corbet, Larkin, & Lucey, 2020) .
In contrast, Dyhrberg (2016) points out the possibility of using Bitcoin as a hedging instrument. Bitcoin can even be a safe-haven, but its role depends on the stock market types, time horizons, and investment horizons (Bouri et al., 2017; Shahzad, Bouri, Roubaud, Kristoufek, & Lucey, 2019; Stensås, Nygaard, Kyaw, & Treepongkaruna, 2019) . Gil-Alana, Abakah, & Rojo (2020) profess that cryptocurrencies are different from traditional financial and economic assets, and investors should include them to diversify their portfolios. Moreover, Bitcoin's safe-haven properties are even better than gold and commodities (Bouri, Shahzad, Roubaud, Kristoufek, & Lucey, 2020) .
The COVID-19 pandemic is the first global health crisis that has translated into an economic shock since the GFC in 2008 and Bitcoin's inauguration in 2009. The event provides a background to investigate whether Bitcoin exhibits short-term safe-haven features for stocks. We also investigate Ethereum because it is the second-largest cryptocurrency and may also show safe-haven properties (Beneki, Koulis, Kyriazis, & Papadamou, 2019). We choose the US market because it is the largest market, and coincidentally, the US has the highest number of COVID-19 infections (to proxy for the most significant distress) in the world. In this study, we use the terms coins and cryptocurrencies interchangeably.
We find that both Bitcoin and Ethereum are suitable as short-term safe-havens during extreme stock market plunges. We also learn that Ethereum is plausibly a better safe-haven than Bitcoin during the pandemic. However, we also uncover that both before and during the pandemic, Ethereum exhibits the highest daily return volatility, followed by Bitcoin, the S&P500, and gold.
We collect the Bitcoin (BTC) and Ethereum (ETH) data from coindesk.com, and the S&P500 and gold spot price data from DataStream. To control for the potential impact of the Bitcoin halving on May 12, 2020 (Crawley, 2020), we deliberately utilize a short-term observation window from July 1, 2019, until April 6, 2020.
Following previous studies (see, for example, Akhtaruzzaman et al., 2020; Bouri et al., 2017), we utilize the DCC-GARCH methodology (Engle, 2002) to examine the dynamic correlations of the cryptocurrencies, gold, and the S&P500. Bouri et al. (2017) suggest that a weak (strong) safe-haven asset is uncorrelated (negatively correlated) with another asset during times of stress.
We select the mean equation based on the information criteria1 and find that the MA(1) process is the most suitable specification for our DCC-GARCH(1,1) model, as presented in Eq. (1):

$$r_t = \mu + \theta \varepsilon_{t-1} + \varepsilon_t \quad (1)$$

where $r_t$ is a vector of Bitcoin, Ethereum, gold, and S&P500 daily returns, $\mu$ is the conditional mean vector of $r_t$, and $\varepsilon_t$ is the vector of residuals. Meanwhile, the variance equation follows:

$$h_{i,t} = c_i + a_i \varepsilon_{i,t-1}^2 + b_i h_{i,t-1} \quad (2)$$

where $h_{i,t}$ is the conditional variance, $c_i$ is the constant, $a_i$ is the parameter that captures the short-run persistence or the ARCH effect, and $b_i$ represents the long-run volatility persistence or the GARCH effect.

The DCC-GARCH(1,1) equation is then given by $Q_t$, the square positive definite matrix in Eq. (3):

$$Q_t = (1-\alpha-\beta)\bar{Q} + \alpha z_{t-1} z_{t-1}' + \beta Q_{t-1} \quad (3)$$

where $\bar{Q}$ is the unconditional correlation matrix of the standardized residuals $z_t$ from the first-step estimation of the GARCH(1,1) process, and $\alpha$ and $\beta$ are parameters quantifying the effects of previous shocks and previous DCCs on the current DCC. To investigate whether the correlations are dynamic, we perform the Wald test. The Wald test suggests that the correlations are indeed dynamic since $\alpha$ (at one percent) and $\beta$ (at ten percent) are statistically different from zero. Also, the sum of $\alpha$ and $\beta$ is less than unity.2

The DCC between assets i and j is then calculated as in Eq. (4):

$$\rho_{ij,t} = \frac{q_{ij,t}}{\sqrt{q_{ii,t}\,q_{jj,t}}} \quad (4)$$
Following Aielli (2013), we also estimate the corrected-DCC (cDCC) and compare the outcomes with the DCC results as a robustness test.
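To make the recursion concrete, here is a minimal numpy sketch of the correlation dynamics in Eqs. (3)-(4). It is my illustration, not the authors' estimation code: it takes standardized GARCH(1,1) residuals as given and fixes alpha and beta, whereas in practice these come from quasi-maximum-likelihood estimation.

```python
# A minimal sketch (not the authors' code) of the DCC(1,1) correlation
# recursion in Eqs. (3)-(4), given standardized GARCH(1,1) residuals z (T x N).
import numpy as np

def dcc_correlations(z: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    T, N = z.shape
    Q_bar = np.corrcoef(z, rowvar=False)   # unconditional correlation target
    Q = Q_bar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        if t > 0:                          # Eq. (3) update
            Q = (1 - alpha - beta) * Q_bar + alpha * np.outer(z[t-1], z[t-1]) + beta * Q
        d = np.sqrt(np.diag(Q))
        R[t] = Q / np.outer(d, d)          # Eq. (4): rho_ij = q_ij / sqrt(q_ii q_jj)
    return R

rng = np.random.default_rng(0)             # simulated residuals for 4 assets
R = dcc_correlations(rng.standard_normal((250, 4)), alpha=0.05, beta=0.90)
print(R[-1].round(3))                      # last-day correlation matrix
```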
After investigating the dynamic correlations, we also adopt the method of Baur et al. (2018) and run OLS regressions with the Newey-West robust estimator, as presented in Eq. (5):

$$Coin_t = \beta_0 + \beta_1 (Covid19_t \times Gold_t) + \beta_2 Gold_t + \beta_3 (Covid19_t \times Stock_t) + \beta_4 Stock_t + \varepsilon_t \quad (5)$$

where $Coin_t$ is the cryptocurrency (Bitcoin or Ethereum) return at day t, $Gold_t$ is the gold return at day t, $Stock_t$ is the stock return at day t, and $Covid19$ is a dummy variable that equals one if day t is on the pandemic announcement date (March 11, 2020) or the subsequent days. If the cryptocurrency serves as a safe-haven in the pandemic, then the coefficient $\beta_1$ is expected to be positive, while the coefficient $\beta_3$ is expected to be negative (Baur et al., 2018).
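A minimal sketch of estimating Eq. (5) with a Newey-West (HAC) covariance in statsmodels follows; it is not the authors' code, and the simulated returns, column names, and 14-day pandemic window are placeholders for the actual data.

```python
# A minimal sketch (not the authors' code) of Eq. (5) with a Newey-West (HAC)
# covariance estimator; the simulated data and window length are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 193
df = pd.DataFrame({
    "coin": rng.normal(0, 0.04, n),    # Bitcoin or Ethereum daily returns
    "gold": rng.normal(0, 0.01, n),
    "stock": rng.normal(0, 0.02, n),
})
df["covid19"] = (np.arange(n) >= n - 14).astype(int)  # dummy for pandemic days

res = smf.ols("coin ~ covid19:gold + gold + covid19:stock + stock",
              data=df).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(res.params)  # beta1 is the covid19:gold term, beta3 the covid19:stock term
```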
Based on Table 1, we learn that volatility tends to increase during the pandemic. Before (during) the pandemic, the daily return standard deviations of Bitcoin, Ethereum, gold, and the S&P500 are 3.44% (9.11%), 4.34% (10.96%), 0.89% (2.19%), and 1.27% (6.07%), respectively. The increase in volatility is also visible from the return plots in Figure 1. All returns during the pandemic are more volatile than before the pandemic.
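As a sketch of how this before/during split can be computed (my illustration, with an assumed data layout rather than the authors' files):

```python
# A minimal sketch (my illustration) of the before/during volatility split
# reported above; the column names and file layout are assumptions.
import pandas as pd

def vol_split(returns: pd.DataFrame, announcement: str = "2020-03-11") -> pd.DataFrame:
    """Daily return standard deviations before vs. from the WHO announcement."""
    before = returns.loc[returns.index < announcement].std()
    during = returns.loc[returns.index >= announcement].std()
    return pd.DataFrame({"before": before, "during": during})

# Usage: returns = pd.read_csv("returns.csv", index_col=0, parse_dates=True)
#        print(vol_split(returns) * 100)  # in percent
```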
[Insert Table 1]
[Insert Figure 1]
Table 2 demonstrates that the pairwise correlations between gold and both coins tend to increase during the pandemic. Meanwhile, the correlations between the S&P500 and both coins turn negative. The correlation between the S&P500 and Bitcoin (Ethereum) is -0.3790 (-0.3757). These are the initial signs that both cryptocurrencies are potential safe-havens for stocks. To inquire whether the Bitcoin halving may affect this study's results, we compare Bitcoin and Ethereum returns. We learn that their correlation before (during) the pandemic is 0.8306 (0.9841). Since Ethereum does not face halving, the high correlation indicates that the Bitcoin halving will not significantly impact this study's results.
[Insert Table 2]
The S&P500 and gold dynamic correlations (Figure 2(A)) before the pandemic are always negative, between -0.3801 and -0.1479, with a median of -0.2909. During the pandemic, the correlations tend to be less negative, with a median of -0.1800. The S&P500 and Bitcoin dynamic correlations (Figure 2(B)) before the pandemic are not always negative. The correlations vary between -0.0713 and 0.1007, with a median of -0.0047. However, they incline to become more negative during the pandemic, with a median of -0.0393. Hence, Bitcoin is a prospective safe-haven for stocks.
[Insert Figure 2]
Before the pandemic, the S&P500 and Ethereum dynamic correlations (Figure 2(C)) are often negative, between -0.1259 and 0.1180, with a median of -0.0580. During the pandemic, the correlations still tend to be negative, with a median of -0.0499. Ethereum might be a better safe-haven than Bitcoin for three reasons. Firstly, for the whole period, the median correlation between Ethereum and the S&P500 (-0.0570) is lower than the median correlation between Bitcoin and the S&P500 (-0.0066). Secondly, different from Bitcoin and gold,3 the Ethereum and gold dynamic correlations (Figure 2(D)) are always positive, even before the pandemic, with a median of 0.1382. The correlations tend to increase during the pandemic, with a median of 0.1754. Finally, during the pandemic, the Ethereum and gold median correlation (0.1754) is higher than that of Bitcoin and gold (0.1466).
[Insert Figure 3]
As a robustness check, we also estimate the corrected-DCC (cDCC) (Aielli, 2013) and superimpose the dynamic correlations on the DCC plot (Figure 3). Figure 3 shows the alignment between the cDCC and DCC results. Both Bitcoin and Ethereum exhibit safe-haven traits because their returns tend to correlate negatively with the S&P500. The entire-period median correlation between the S&P500 and Ethereum (Bitcoin) is -0.0545 (-0.0085). Comparable to the DCC, Ethereum is potentially a better safe-haven than Bitcoin for three reasons. First, the median correlation of Ethereum and the S&P500 is more negative than that of Bitcoin and the S&P500 (-0.0545 vs. -0.0085). Second, different from Bitcoin and gold,4 the dynamic correlations between Ethereum and gold are always positive, with a median before (during) the pandemic of 0.1364 (0.1818) (Figure 3(D)). Third, in the pandemic, Ethereum and gold are more positively correlated, with a median of 0.1818, than Bitcoin and gold, with a median of 0.1552.
We further investigate the safe-haven properties of Bitcoin and Ethereum during the COVID-19 pandemic by utilizing regressions as specified in Eq. (5). If a coin is a potential safe-haven, then the coefficient on the interaction Covid19 × Gold_t (β1) should be positive while the coefficient on the interaction Covid19 × Stock_t (β3) should be negative. In other words, during the pandemic, a safe-haven return should be positively associated with the gold return while negatively correlated with the stock return.
[Insert Table 3]
The results for Bitcoin are in Table 3(A). We use three different scenarios based on the number of days in the pandemic: 7, 10, and 14 days. Based on the results, we learn that Bitcoin displays safe-haven characteristics. In all three scenarios, the Bitcoin return is positively associated with the gold return and negatively associated with the stock return. The Bitcoin findings are in line with Gil-Alana et al. (2020) and Stensås et al. (2019) but different from Conlon & McGee (2020) and others, who profess that Bitcoin is an imperfect hedge during the COVID-19 pandemic.
We also find similar results for Ethereum, as presented in Table 3(B). For all 7-, 10-, and 14-day pandemic scenarios, we observe that the Ethereum return correlates positively with the gold return but inversely with the stock return. Ethereum is plausibly a better safe-haven than Bitcoin since, in all scenarios, the β1 and β3 of Ethereum are consistently larger than those of Bitcoin. The Ethereum results are, to some extent, different from earlier findings that Ethereum is not a safe-haven for US aggregate stocks.
We have also investigated the FTSE-100 and find that the Bitcoin and Ethereum coefficients are all as expected, but they are significant only for the 7-day setting.5 The overall regression results support the notion that Bitcoin and Ethereum exhibit safe-haven qualities for stocks. However, we are also cognizant that both coins exhibit daily return volatilities higher than gold and stocks (Table 1). To alleviate the volatility problems, Baur & Hoang (2020) advise adding a stablecoin such as Tether, which acts as a safe-haven for both coins. We have also added Tether to the regressions, and the results still hold, except for the 10-day scenario.6
Based on the WHO COVID-19 pandemic proclamation on March 11, 2020, we test Bitcoin and Ethereum as safe-havens for stocks. Our dynamic correlation and regression results show that Bitcoin and Ethereum, as the two major cryptocurrencies, display short-term safe-haven characteristics for stocks. Moreover, we learn that Ethereum might be a better safe-haven than Bitcoin during a short extreme stock market downturn, but Ethereum exhibits higher return volatility than Bitcoin. Our results are in line with Gil-Alana et al. (2020) and Stensås et al. (2019) but differ from Conlon & McGee (2020) and others. The difference may arise because we focus on short-term safe-haven properties and use a relatively shorter observation window.
Although both cryptocurrencies exhibit safe-haven features, we realize that their volatilities are higher than those of gold and the S&P500. The before (during) pandemic daily return volatilities of Bitcoin, Ethereum, gold, and the S&P500 are 3.44% (9.11%), 4.34% (10.96%), 0.89% (2.19%), and 1.27% (6.07%), respectively. We are mindful that incorporating coins into a portfolio may not be easy due to the high transaction costs and illiquidity (Smales, 2019). Nevertheless, we hope that with additional future regulations, the coins' volatility could be lower. The regulations should increase market information availability and hinge on the fact that cryptocurrencies are different from existing asset classes such as gold, commodities, or stocks (Gil-Alana et al., 2020; Yu, Kang, & Park, 2019). We also recognize that the coins' safe-haven characteristics are reliant on market conditions and investment horizons, as described in prior studies (Bouri et al., 2017; Shahzad et al., 2019, 2020; Stensås et al., 2019).
|
Ocular surface squamous neoplasia (OSSN) includes a variety of dysplastic changes of the conjunctiva and cornea, ranging from benign dysplasia to carcinoma in situ to invasive squamous cell carcinoma [1, 2] . Risk factors include ultraviolet (UV) light, immunosuppression, human immunodeficiency virus (HIV), human papillomavirus (HPV), mutations of p53, and older age [1] [2] [3] . Patients typically complain of redness, foreign body sensation, and growth on the ocular surface [1] .
Treatment of OSSN includes a variety of options and even combinations of therapy, such as excisional biopsy, cryotherapy, or topical chemotherapy [4, 5] . Personalizing treatment requires evaluation of not just the medical aspects of the condition but also the social needs of the patient. For example, the use of primary or adjunctive topical chemotherapy may not be ideal for an undocumented and uninsured New York City (NYC) patient. The potentially high out-of-pocket costs for topical chemotherapy, lost work time for follow-up visits, the cost of the office visits, and even transportation costs to the visits, may limit a patient's ability to comply with topical chemotherapy regimens. With these factors in mind, we report the first documented selection of this cost-effective treatment of OSSN (the use of absolute ethanol along the corneal margin, primary excision, double freeze-thaw cryopexy, and primary conjunctival closure) for an undocumented and uninsured NYC patient.
A 35-year-old man from Ecuador presented to a NYC emergency department due to worsening discomfort of a long-standing left eye pterygium. He denied changes in vision and discharge to both eyes. Further history revealed he is an undocumented and uninsured outdoor day-laborer from Ecuador and former tobacco smoker (he quit 8 years ago, two to three cigarettes per day). He denied significant past medical history, surgical history, and family history of malignancy. Physical examinations were unremarkable apart from the eye lesion. Documented serological HIV testing was performed and confirmed that he is HIV-negative.
His visual acuity without correction was 20/20 in the right eye and 20/25 in the left eye. His pupils were 5-2 mm with no apparent pupillary defect in either eye. Extraocular muscles were intact in both eyes. A slit-lamp examination of the right eye was unremarkable. A slit-lamp examination of the left eye (Fig. 1a) demonstrated a 6 × 8 mm elevated flesh-like mass, 2+ injection, and lobulated extensions of the conjunctival mass encroaching on the cornea (3 mm superiorly and 6 mm inferiorly). Applanation tonometry was within normal limits: 16 mmHg right eye and 15 mmHg left eye at 07:20 a.m. Dilated funduscopic examination of both eyes was unremarkable.
Surgical excision with adjunctive absolute alcohol and additive double freeze-thaw cryopexy was performed. Using a "no-touch" technique, the clinical boundaries of the tumor were outlined with an approximately 4 mm margin of clinically normal tissue on the superior, inferior, and temporal sides [6]. The conjunctiva was marked with cautery, and absolute alcohol-soaked Weck-Cel® sponges were then placed for 10 seconds each along the temporal, superior, inferior, and nasal margins of the tumor, which extended into clear cornea (Shields C.L., the number of seconds for the application of the alcohol-soaked Weck-Cel, personal email correspondence on 26 July 2020) [6]. Once the epithelium was loose, the epithelial portion of the tumor was removed with a Weck-Cel without violating Bowman's membrane [6]. The conjunctival extension was then excised starting from the caruncle and undermining it toward the limbus, with removal of the tumor in a single block. The specimen was marked and sent for pathology. Cautery was applied to the base, followed by a Weck-Cel soaked in absolute alcohol applied to the base [6]. A No. 57 Beaver® blade was then used to scrape away all limbal cells and all remaining limbal tissue [6], and an additional Weck-Cel soaked in absolute alcohol was applied [6]. The conjunctiva was undermined and then closed with six interrupted 7-0 Vicryl sutures after double freeze-thaw conjunctival cryopexy had been applied along the entire perimeter [6]. A collagen shield soaked in an antibiotic and cycloplegic drops were placed in the left eye, which was then covered with a patch and shield. No topical chemotherapy was used.
Histological diagnosis confirmed squamous cell carcinoma in situ arising from a pterygium (Fig. 2A, B). The underlying subepithelium showed elastotic degeneration (Fig. 2A) and moderate accompanying chronic inflammation. Nasal, superior, and inferior conjunctival margins were negative for malignancy, while the corneal margin was involved by carcinoma. Immunohistochemistry was performed to test for HPV p16, and in situ hybridization was performed to test for HPV 6/11 and HPV 16/18. The specimen was HPV-negative.

Fig. 1 Clinical appearance of the left eye. a At presentation, a triangular-shaped tissue that pulled toward the cornea showed a 6 × 8 mm elevated flesh-like mass, 2+ injection, and lobulated gelatinous extensions of the conjunctival mass encroaching the cornea (3 mm superiorly and 6 mm inferiorly on the cornea). Absolute ethanol was used intraoperatively along the nasal corneal margin prior to incision. b Postoperative day 1 showed expected conjunctival chemosis and surgical clearance of the gelatinous material over the cornea. c Postoperative year 2 showed 1+ conjunctival injection, trace conjunctival scarring, and no tumor recurrence.
He was discharged on the same day as the procedure and was followed up on postoperative day 1 at an out-patient office (Fig. 1b). A slit-lamp examination on postoperative day 1 showed expected conjunctival chemosis and surgical clearance of the gelatinous material over the cornea. He was provided with medication samples of an antibiotic and a corticosteroid ophthalmic suspension during this visit. Follow-up visits at postoperative week 1 and postoperative month 1 demonstrated 3+ conjunctival injection, an expected corneal epithelial defect, and intact sutures. The medication samples were tapered after postoperative month 1. Postoperative month 3 showed 1+ conjunctival injection, clear cornea, intact sutures, and no tumor recurrence. Postoperative month 7 showed 1+ conjunctival injection, trace conjunctival scarring, and clear cornea. Postoperative year 2 showed trace conjunctival scarring (Fig. 1c). He has remained free of tumor recurrence at the 2-year postoperative visit.
OSSN includes a variety of dysplastic changes in the conjunctiva and cornea. Risk factors include UV light, immunosuppression, HIV, HPV, mutations of p53, and older age [1] [2] [3] . The gold standard for diagnosis of OSSN is histology, while Rose Bengal, lissamine green, and methylene blue stains are non-invasive methods of clinically identifying suspicious OSSN lesions [7] [8] [9] . Treatment options include excisional biopsy, cryotherapy, and topical chemotherapy [4, 5] . To the best of our knowledge, this is the first documented selection of this cost-effective treatment of OSSN (the use of absolute ethanol along the corneal margin, primary excision, double freeze-thaw cryopexy, and primary conjunctival closure) for an undocumented and uninsured NYC patient.
UV light is known to cause DNA damage and create pyrimidine dimers [1]. Newton et al. studied the effects of ambient UV light on the incidence of ocular squamous cell carcinoma [10]. They found that with each 10-degree increase in latitude, the incidence of ocular squamous cell carcinoma decreased by 49% [10]. For comparison, Uganda had more than 12 cases/million per year versus the UK's <0.2 cases/million per year [10]. Our patient was an outdoor day-laborer from Ecuador, and UV light may have been a contributing cause of his OSSN.
Previous studies have shown that patients with OSSN younger than 50 years of age warrant HIV testing [3]. Mahomed and Chetty reported that 12 out of 17 (70.6%) South African patients with OSSN tested for HIV had positive results, and all 12 HIV-positive patients were under the age of 50 years [3]. Karcioglu and Wagoner noted that OSSN in younger African patients (age range, 32-37 years) was strongly associated with HIV-positive status [11]. Pradeep et al. found six out of 21 (28.6%) patients with OSSN to be HIV-positive in South India [12]. Among patients who were HIV-positive in South Africa, the median age of presentation was 36 years, while patients who were HIV-negative had a median age of 54 years [12]. Guech-Ongey et al. found 15 US patients who were HIV-positive with squamous cell carcinoma of the conjunctiva; of the 15, 10 (67%) were under the age of 50 years relative to acquired immune deficiency syndrome (AIDS) onset [13]. In the USA, the mean age of presentation of OSSN is 64 years [14, 15]. Our case report presents OSSN in a 35-year-old man in the USA with documented serological HIV-negative test results and demonstrates that a young, HIV-negative US man may still have a possibly malignant ocular surface lesion.
Reports of the correlation between HPV and OSSN have been discordant: the reported HPV rate in OSSN ranges from 0 to 88.1% [16-22]. Certain studies have noted HPV, specifically high-risk HPV type 16, to be highly correlated with OSSN [16]. Carrilho [15, 23]. Another study considered not only tobacco smoke but also household smoke from cooking fires; neither factor increased the risk of OSSN [24]. Although our patient was a former cigarette smoker who smoked 2-3 cigarettes per day, it is unlikely that his OSSN developed due to cigarette smoking.
Mutations or deletions of p53 have been associated with OSSN specimens [3]. p53, a tumor suppressor protein, is encoded on chromosome 17p13 and causes cell cycle arrest at the G1-S checkpoint [25]. Mutations or deletions of p53 promote genomic instability and allow carcinogenesis to proceed [26]. Other risk factors, such as HPV and UV light, act synergistically on OSSN proliferation [27, 28]. Mahomed and Chetty found 28 out of 40 (70%) lesions to contain positive staining for p53 in neoplastic cells [3]. Our patient's specimen did not receive immunostaining for p53 because this test is not routinely performed for a clinical squamous cell carcinoma in situ specimen.
The intraocular spread of OSSN and metastasis may occur but are uncommon [1] . Tumor cells can enter through the limbus and invade the Schlemm's canal, the trabecular meshwork, the anterior chamber, the suprachoroidal space, and the uvea [1] . The invasion may cause inflammation, iritis, glaucoma, retinal detachment, and scleral thinning [1] . Although OSSN metastasis rate is < 1%, physical examinations of the preauricular/submandibular/cervical lymph nodes, parotid glands, lungs, and bones need to be conducted [1, 29] . Lee and Hirst noted that the major factor for the metastatic spread was a delay in seeking medical treatment [1] . With our patient's inconsistent follow-up with his previous ophthalmologists, it was important to perform a thorough neck examination. Our patient's slit-lamp examination (aside from the OSSN), intraocular pressure, and dilated funduscopic examination were unremarkable. He had no clinical evidence of anterior cervical and posterior cervical lymphadenopathy. A physical examination of his neck and musculoskeletal system were unremarkable. Thoracic and abdominal physical examinations were within normal limits, and a preoperative chest X-ray demonstrated only one calcified granuloma in the upper lobe of his right lung. Since the rate of OSSN metastasis is < 1% and unlikely in this case, based on his physical examination, no additional testing was performed; however, periodic clinical follow-up was recommended to our patient.
The standard of care for OSSN is currently evolving. Numerous studies have tested topical chemotherapy, such as 5-fluorouracil (5-FU), mitomycin C (MMC), and interferon α-2b (IFNα2b), for OSSN [5, 30]. Joag et al. identified that 82% of cases of OSSN had a complete response to 5-FU as the primary treatment, without long-term complications [31]. Side effects of 5-FU include pain, tearing, photophobia, itching, swelling, and infection [31]. Shields et al. found that IFNα2b monotherapy produced a complete response in 75% of cases, while IFNα2b combined with surgical excision of OSSN achieved complete control in 95% of cases [32]. Besley et al. noted that 84.9% of patients with OSSN treated with MMC alone did not have a recurrence [33]. The adverse effects of MMC include pain, epitheliopathy, allergic conjunctivitis, hyperemia, punctal stenosis, and ectropion [5]. Since IFNα2b has a minimal side-effect profile in comparison to 5-FU and MMC, it would have been ideal to use IFNα2b as an adjuvant to surgical excision for our patient.
Although implementing topical chemotherapy would have been ideal and preferable to decrease the recurrence rate, socioeconomic factors and the cost of treatment had to be considered. NYC is a sanctuary city in which undocumented immigrants are protected from detention based solely on immigration status [34]. NYC has two bills that reduce the presence of Immigration and Customs Enforcement (ICE) at Rikers Island and all City facilities [35]. NYC is home to at least 560,000 undocumented immigrants, of whom approximately 200,000 do not have access to health insurance [36, 37]. As many as 9.5% of those under the age of 65 do not have health insurance in NYC [38]. Hispanics, including persons from Mexico, Central America, South America, and the Caribbean, comprise the highest number of undocumented immigrants in the USA [39]. The inequality in health care access is reflected in recent events, including the increased mortality rate from COVID-19 in NYC's Hispanic community [40]. Socioeconomic barriers to health care include lack of access to testing, increased co-morbidities due to decreased access to physicians, language barriers, unequal access to higher education, lack of transportation to appointments, lack of access to child care, and crowded housing [41]. Due to limited access to primary care and specialist physicians, many undocumented and uninsured patients use the emergency department as their primary source of care [37]. Our patient, for example, required in-patient admission through the emergency department to obtain preoperative clearance for his OSSN excision.
In addition to using the emergency department to prepare our patient for his OSSN surgery, out-of-pocket out-patient treatments (not covered by his in-patient hospital stay), such as the cost of topical chemotherapy and future appointments, present a challenge to the undocumented and uninsured. Al Bayyat et al. noted that out-of-pocket costs for IFNα2b can range between approximately $240 and $600 per month in the USA, while 5-FU and MMC cost $38-$75 and $100-$200 per bottle, respectively [5, 7]. Other socioeconomic factors, such as work schedules, payments for office visits, and transportation costs, may limit a patient's ability to comply with topical chemotherapy regimens. Topical chemotherapy requires consistent follow-up because of its specific scheduled regimens: IFNα2b drops are used four times a day until clinical resolution, 5-FU is used four times a day for a week followed by a 3-week break, and MMC is used four times a day for a week followed by 2-3 weeks off [5]. Although the number of follow-up appointments for the surgical approach and for the topical chemotherapy approach may be similar, topical chemotherapy was not an option for our patient due to the high out-of-pocket medication costs and previous poor appointment adherence. He had a history of poor appointment adherence with his previous ophthalmologists due to his inability to pay for office visits and transportation. He also worked as an undocumented outdoor day-laborer and was unable to take time off work for out-patient appointments. His delayed presentation to the emergency department due to limited financial means (an uninsured outdoor day-laborer), inconsistent follow-up with his physicians, and the high out-of-pocket medication costs made him a poor candidate for topical chemotherapy. Therefore, the most cost-effective and efficacious treatment for our patient was the use of absolute ethanol along the corneal margin, primary excision, double freeze-thaw cryopexy, and primary conjunctival closure [6].
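As a rough, hypothetical back-of-envelope comparison using only the price ranges quoted above, the sketch below estimates a 3-month course; the cycles-per-course and bottles-per-cycle figures are assumptions, not data from the case, and real costs vary by pharmacy.

```python
# Hypothetical 3-month out-of-pocket ranges from the quoted unit prices [5, 7];
# bottles-per-cycle and cycles-per-course are assumptions for illustration.
ifn_month = (240, 600)    # IFNα2b, USD per month (used until resolution)
fu_bottle = (38, 75)      # 5-FU, USD per bottle (1 week on / 3 weeks off)
mmc_bottle = (100, 200)   # MMC, USD per bottle (1 week on / 2-3 weeks off)

ifn_3mo = tuple(3 * p for p in ifn_month)    # 3 months of continuous use
fu_3mo = tuple(3 * p for p in fu_bottle)     # ~3 cycles in 3 months
mmc_3mo = tuple(4 * p for p in mmc_bottle)   # ~3-4 cycles in 3 months
print(f"~3-month ranges: IFNα2b ${ifn_3mo[0]}-{ifn_3mo[1]}, "
      f"5-FU ${fu_3mo[0]}-{fu_3mo[1]}, MMC ${mmc_3mo[0]}-{mmc_3mo[1]}")
```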
In conclusion, NYC has a heterogeneous population with many undocumented and uninsured immigrants from equatorial areas that have a higher incidence of OSSN. Our patient's day-laborer status is typical of undocumented workers in NYC and other US areas that are highly populated with undocumented immigrants [42]. Our case is the first to document the use of absolute ethanol along the corneal margin, primary excision, double freeze-thaw cryopexy, and primary conjunctival closure as the preferred cost-effective treatment for an undocumented and uninsured NYC patient. Our patient has remained free of tumor recurrence at the 2-year postoperative visit. While topical chemotherapy, with or without surgery, is an evolving therapeutic option, its costs and required follow-up can be a barrier for many patients. Areas of the world with similar populations or treatment challenges may need to consider this approach as a primary treatment option.
|
On 15 March 2020, the Dutch government announced far-reaching measures in response to the COVID-19 pandemic. Central to these was the drastic restriction of social and physical contact with other people. Schools and businesses were closed; those who could were required to work from home. This marked the beginning of an 11-week lockdown. For many people, especially those with a chronic condition, it was a tense and uncertain episode. It soon became clear that the latter group had an increased risk of a severe disease course and of death. Dying of COVID-19 has proven to be a dramatic death, because it very often happens in an intensive care unit under circumstances that make normal human contact with caregivers, and a farewell to loved ones, impossible. On 11 May and 1 June 2020, stepwise relaxations of the measures came into force. On 1 June, for instance, restaurants and museums reopened, albeit under the conditions of a maximum of 30 visitors and keeping a distance of one and a half metres. For many people with a chronic condition this creates new uncertainty about whether or not to participate in social life, including returning to work.
In this article we discuss the case of a museum employee who is particularly vulnerable with regard to COVID-19. In advising the employer and the employee, the occupational physician faces a difficult dilemma. To answer the moral question at hand, the occupational physician applies the method of moral judgement formation. 1 The essence of this method is that justice is done to all persons involved in the advice concerned. Moral judgement formation is a methodical inquiry consisting of seven steps, in which the arguments for the various options for action are, as far as possible, classified as principle-based or consequence-based arguments. Principle-based arguments refer to the rights of the persons involved and therefore carry more weight than consequence-based arguments. A number of purely factual arguments cannot be classified further.
First, the authors present an anonymised case from the practice of one of them. Next, the seven steps of the moral judgement formation method are followed, with the occupational physician presenting the case acting as the subject: after all, he or she must take the final decision. The view of this physician is not automatically identical to that of each of the authors. My provisional choice is alternative A. Everyone, including Mrs Van den Abeelen, must do their bit in easing the lockdown, in this case by helping to restart the museum. The drawback of this choice is that I do not address the employee's fear of infection.
Those involved: Van den Abeelen as employee, the museum director as employer, the other staff members, the museum visitors, and the occupational physician.

The decision on the advice that the employer and the employee ask of me lies with me.
Do I need additional medical, legal or situational information? As the occupational physician, I have known Van den Abeelen for a long time because of earlier periods of long-term absence: several times due to an exacerbation of her asthma, and once due to kidney surgery. Based on the first publications on risk groups and mortality in COVID-19, her risk of pulmonary complications is significantly increased. Legally, the matter seems clear. Mrs Van den Abeelen currently has no health complaints that make her unfit for her work in the museum. She is not unfit for her work. That does not, however, rule out that her work is currently unfit for her. The work situation deserves my attention, and on that point the facts are clear. As operational manager, Van den Abeelen is the linchpin of the museum. In addition, for technical problems, for example with the climate-control installation, she is the expert par excellence. Her work does not lend itself to being done from home. I look up the COVID-19 measures for the museum sector on the internet; the museum has applied them meticulously. As for her private activities (playing tennis, doing the shopping), Van den Abeelen keeps her distance more strictly than is the case within the museum walls. She plays singles tennis, only with her own partner, and showers at home. She does her shopping early in the morning and there, too, keeps well beyond one and a half metres from others. In the museum, by contrast, she is constantly approached and addressed by staff and visitors.
In this step, all arguments that may play a role in the moral weighing are collected. The arguments for each of the two alternatives for action deserve equal attention, and all arguments must be admitted. This approach prevents important or unwelcome arguments from disappearing from view at an early stage. There is thus a moral reason for this approach, namely that it best allows the rights, interests and wishes of all involved to be weighed. This step makes it possible to exchange views on the moral inquiry in an open manner. The crux lies in searching, whether or not jointly, for as complete a picture as possible of all the arguments.
Step 6: Classify the arguments as principle-based or consequence-based arguments. Weigh the arguments. Name the most important arguments for and against. Take the final decision. In this step the arguments are weighed and a final judgement is formed. It is helpful here to distinguish between different types of moral arguments. There are arguments that relate to the rights of the persons and institutions involved. In such arguments, a principle expresses the obligation by which the right of the person involved is met; we therefore call them principle-based arguments.

There are also arguments that concern the interests and wishes of those involved. Interests and wishes are usually expressed as the consequences that a decision entails for those involved. An argument that looks at the consequences of decisions is called a consequentialist argument. Consequentialist reasoning reckons with the consequences of a decision and holds that the decision which yields the greatest well-being for the greatest number of people is morally the best one.

Because rights set a lower limit, they can be overridden by other, weightier rights, but not by consequence-based arguments. In the final weighing, the rights of those involved are more decisive than their interests and wishes. Principle-based arguments limit the scope of consequence-based arguments. The arguments for both alternatives, collected in step 5, are classified below.
Arguments for alternative A (advise resumption of work in the museum):
- Because the museum has taken adequate protective measures, in line with the museum sector's protocol, the risk of infection in the museum has been considerably reduced. PRINCIPLE
- The contribution of all staff members is needed for the museum to survive financially. CONSEQUENCE
- During the lockdown period the employee was also active outside the home, playing tennis, doing the shopping, and so on. Her fear of infection is apparently not that severe.
- She currently has no health complaints. The employer may expect her to resume her work in the museum.
- The infection indicators (hospital admissions, ICU bed occupancy, mortality) have been falling for weeks. The employee therefore need not be afraid to go to work.
- If the employee resumes work, her colleagues will experience a lower workload. CONSEQUENCE

Arguments for alternative B (advise against resuming work for now):
- The employee clearly belongs to a vulnerable group. Her risk of serious health damage and of death after infection is increased. It is my duty as an occupational physician to protect people against the risk of work-related damage to health. PRINCIPLE
- The employee chooses for herself not to go to work now. If she resumes work in the museum, the chance of infection cannot be ruled out. For this employee the consequences would be disproportionately large, much larger than for colleagues without a lung condition. In that case a sacrifice is asked of her, which means she has the right to decide for herself whether she wants to make that sacrifice. PRINCIPLE
- The employee does not want to run any risk of infection. She considers the chance of infection in the museum still too high for herself, despite the measures taken.
- The employee has the legal right to call in sick in the event of imminent danger to her health at work. PRINCIPLE
- The employee's fear of infection is real: the coronavirus still causes dozens of new infections, hospital admissions and deaths every day. As an occupational physician I must take that fear into account. It remains to be seen whether the decline in the number of infections will continue after the easing of the lockdown.

On the left-hand side we see one principle-based argument, which is also the most important argument for Van den Abeelen resuming work. At stake here is the principle of doing no harm, in this case preventing unnecessary suffering from a serious infection: the museum takes appropriate measures, which reduce, but do not remove, the chance of infection for all employees and for the museum's visitors. On the other side we see three principle-based arguments. Taking account of an employee's increased vulnerability reflects the principle of doing no harm; on the right-hand side that is the most important argument. A second principle-based argument lies in the right of every employee to turn to the occupational physician in the event of a health hazard and, if necessary, to call in sick; that right is also laid down in law. Finally, the employee has the right to self-determination if she decides not to work in the museum. In the weighing, what matters is not which side has the most principle-based arguments: what is weightiest must weigh heaviest. That is difficult in this case, because the key argument on each side goes back to the biomedical principle of 'do no harm'. The principle-based argument on the right (alternative B) relates specifically to Mrs Van den Abeelen, whereas the one on the left (alternative A) concerns reducing the chance of infection for all employees. The first-mentioned principle-based argument must therefore be weighed more heavily. The other two principle-based arguments on the right-hand side, which concern the employee's autonomy, confirm this choice.
Weighing everything, I choose alternative B, with the smallest possible health risk for my client, and I advise Mrs Van den Abeelen not to start work in the museum on 1 June but to remain available at home. I report the same to her employer. I give this advice despite the fact that the museum has done everything it can to arrange the work situation optimally, and despite the existing risk of bankruptcy.
Intermediate step: harm reduction. What actions are possible to limit or compensate for the harm and disadvantages of the decision taken? My advice is valid for one month; after that I want to make a new assessment, in which I will take into account the national COVID-19 incidence over the month of June as well as the museum's experience with the regulated reopening. I will discuss with the employer whether the employee can in the meantime do some form of work from home, without physical contact with known or unknown persons, while remaining available by telephone for operational questions.
Step 7: Gauge how the presenter and the co-thinkers feel about the decision taken. All arguments were weighed in my decision. The employee agrees with it. The employer is astonished: "We are trying with all our might to get the museum open again, and now this! The entire staff supports the measures and cooperates, except Renée!" The occupational physician explains that the measures are the same for everyone, but that the consequences for work can differ from person to person. For most people resuming work is justifiable, but for Van den Abeelen that does not (yet) appear to be the case at this time.
The occupational physician's moral judgement is provisional, but it is not relative or subjective. It is the outcome of a careful weighing of all the rights and interests of every person or body affected by a particular action. According to that weighing, the choice fell on alternative B.
The 'moral' of this case. It is important to distinguish between the concepts of 'chance' and 'risk'. A chance is a statistical quantity expressing the probability of an event. A risk also takes into account the magnitude of the effect: expressed as a formula, risk is the product of probability and effect. Thus, for the museum, after implementation of the protective measures, the chance of infection is in principle the same for everyone who goes to work there. The risk, however, is greater for workers with increased vulnerability, because for them the consequences of an infection can be much more serious. That plays a part in the weighing made here; see the first argument for alternative B.
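Schematically, with the distinction just drawn (a minimal illustration; the scaling factor k is a hypothetical quantity, not a measured one):

```latex
% Risk as the product of probability (chance) and effect size:
\[ R = P \times E \]
% With the same infection probability P for every worker, but an effect
% E_v = k \cdot E_a with k > 1 for a vulnerable worker, the risk ratio is
\[ \frac{R_v}{R_a} = \frac{P \, E_v}{P \, E_a} = k \]
```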
This case can contribute to good occupational health care in the easing of the lockdown. The coronavirus pandemic and the lockdown have been, and still are, a particularly drastic experience for chronically ill and vulnerable patients. It is patients with underlying conditions, such as Mrs Van den Abeelen, who are hit hardest by COVID-19. It is also these patients, as in the nursing homes, who were insufficiently protected during the crisis. In the easing of the lockdown these patients deserve extra care and attention from the occupational physician. First, because of the still-present risk that they may yet be infected by the coronavirus, with possibly serious or even fatal consequences. Second, because of the fear, uncertainty and pressure that employees like Mrs Van den Abeelen may be facing. The occupational physician is then the beacon that protects vulnerable employees against the disease, the pressure, the false shame ('I am a coward') and the feelings of guilt ('I am letting my colleagues down'). Employees like Van den Abeelen should have the right, in comparable situations, to decide for themselves whether they want to work. But perhaps the occupational physician should protect them from that decision for a while: 'Wait and see how the easing of the lockdown turns out.' In the meantime, the occupational physician can advise on preventive measures against COVID-19 infection.
The moral: in a pandemic, the protection of health, in the broad sense, takes precedence over occupational reintegration and participation. For a while, the occupational physician is once again simply a healer.
|
Some data suggest persons with cancer are more susceptible to SARS-CoV-2 infection and to developing coronavirus disease 2019 (COVID-19) compared with normals (1% [95% Confidence Interval (CI), 0.6, 1.7%] versus 0.1% [0, 0.12%]), but these estimates are controversial and it is unclear whether this increased risk applies to persons with all cancer types [1-8].
One study of 125 hospitalised persons with haematological cancers reported a 10 percent (6, 17%) case rate of COVID-19, but none of the subjects had chronic myeloid leukaemia (CML) [9]. We performed a cross-sectional survey of non-hospitalised persons with CML receiving tyrosine kinase-inhibitor (TKI)-therapy in Hubei Province to explore the prevalence and clinical features of COVID-19 during the SARS-CoV-2 pandemic. The prevalence of COVID-19 in persons with CML, 0.9 percent (0.1, 1.8%), was substantially higher than in normals but lower than in hospitalised persons with haematological cancers. Clinical features of COVID-19 in our subjects and in otherwise normal persons were similar. We identified co-variates associated with an increased risk of developing COVID-19; persons with these co-variates may benefit from increased SARS-CoV-2 infection surveillance and possible protective isolation.
From February 15, 2020 to April 10, 2020, persons with CML receiving TKI-therapy in Hubei Province were recruited from 29 centres of the Hubei Anti-Cancer Association. An online questionnaire was distributed and collected by physicians at each centre. Persons with CML, or their family (if they were too sick or had died), were asked to complete the questionnaire, which covered two dimensions (Supplementary 1). The first included 16 questions assessing demographics, co-morbidities, and CML-related data including diagnosis, therapy, and response. The second included 12 questions related to COVID-19, including exposure history; symptoms of acute respiratory illness such as fever, cough, shortness of breath, and fatigue; diagnosis; treatment; and outcome. Missing or unclear data items were collected and clarified by direct communication between physicians and the patient, their family and/or health-care providers. The study was approved by the Ethics Committee of Union Hospital, Tongji Medical College, which waived the requirement for written informed consent. Data were analysed as of April 11, 2020.

Diagnosis, monitoring and response to TKI-therapy of CML

Diagnosis, disease phase, monitoring and response to TKI-therapy were based on European LeukemiaNet recommendations [10].
Infection was confirmed by qualitative real-time polymerase chain reaction (qRT-PCR) for SARS-CoV-2. COVID-19 was diagnosed according to the World Health Organisation criteria (https://apps.who.int/iris/bitstream/handle/10665/331506/WHO-2019-nCoV-SurveillanceGuidance-2020.6-eng.pdf). Severity of COVID-19 was graded as follows (http://www.nhc.gov.cn/yzygj/s7653p/202002/3b09b894ac9b4204a79db5b8912d4440.shtml): (1) mild: mild clinical symptoms and no pneumonia on lung CT scan; (2) common: fever, cough, and pneumonia on lung CT; (3) severe: respiratory distress (respiratory rate >30/min, oxygen saturation (O2Sat) ≤93% at rest, and/or ratio of arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FiO2) ≤300 mmHg); and (4) critical: respiratory failure and mechanical ventilation, shock, organ failure (other than lung) and/or intensive care unit hospitalisation. Therapy of COVID-19 was according to the Novel Coronavirus Pneumonia Prevention and Control Programme of the National Health Commission of China (http://www.nhc.gov.cn/yzygj/s7653p/202002/3b09b894ac9b4204a79db5b8912d4440.shtml).
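As a compact restatement of the grading rules just quoted, the sketch below encodes them as a function; the argument names and the evaluation order are illustrative assumptions, not clinical software.

```python
# A minimal sketch of the national severity grading quoted above; thresholds
# come from the text, but the interface is an illustrative assumption.
def covid19_severity(resp_rate, o2_sat, pao2_fio2, pneumonia_on_ct,
                     resp_failure_shock_or_organ_failure):
    if resp_failure_shock_or_organ_failure:
        return "critical"   # ventilation, shock, organ failure, or ICU care
    if resp_rate > 30 or o2_sat <= 0.93 or pao2_fio2 <= 300:
        return "severe"     # respiratory distress criteria
    if pneumonia_on_ct:
        return "common"     # fever/cough with pneumonia on lung CT
    return "mild"           # mild symptoms, no pneumonia on CT

print(covid19_severity(22, 0.96, 350, True, False))  # -> "common"
```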
Outcomes other than death were defined as follows (http://www.nhc.gov.cn/yzygj/s7653p/202002/3b09b894ac9b4204a79db5b8912d4440.shtml): (1) cure: two successive negative RT-PCR tests >24 h apart and asymptomatic; (2) improved: improvement in signs, symptoms, and laboratory parameters and no progression on lung CT scan; (3) progressing: increase in symptoms and/or progression of lung CT scan findings; (4) stable: not progressing or improving.
Descriptive analysis results are presented as median (range) or number (percentage) as appropriate. Pearson Chi-square or Fisher's exact test for categorical variables and Mann-Whitney U/Kruskal-Wallis tests (for continuous variables) were used to measure between-group differences. Variables with P < 0.05 were considered significant. Analyses were conducted with SPSS version 22.0 software (SPSS Inc., Chicago, IL, USA).
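For illustration, the named tests map directly onto standard library routines; the sketch below uses scipy with purely hypothetical toy numbers, not the study data.

```python
# A minimal sketch of the between-group tests named above; all counts and
# values here are hypothetical illustrations.
from scipy import stats

# Categorical co-variate vs COVID-19 status as a 2x2 contingency table
table = [[4, 1], [136, 389]]                      # illustrative counts only
odds_ratio, p_fisher = stats.fisher_exact(table)  # Fisher's exact (small cells)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Continuous co-variate (e.g., TKI-therapy duration in months) in two groups
grp_a = [42, 36, 60, 18, 55]
grp_b = [40, 44, 12, 70, 39, 48]
u_stat, p_mw = stats.mannwhitneyu(grp_a, grp_b, alternative="two-sided")
print(p_fisher, p_chi2, p_mw)
```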
Among 551 persons with CML receiving TKI-therapy in the Hubei Anti-Cancer Association, 476 filled out electronic questionnaires; another 75 were completed telephonically by health-care providers. Questionnaires from 21 subjects not resident in Hubei Province during the outbreak were excluded, so data from 530 subjects were included in this report. Two hundred and ninety-six (56%) were male. Median age was 44 years (range, 6-89 years). Ninety-five (18%) were ≥60 years. One hundred and forty (26%) had ≥1 co-morbidity(ies). Five hundred and nineteen (98%) were in the chronic phase (CP) at diagnosis of CML. One subject was diagnosed synchronously with CML (by RT-PCR) and COVID-19. When they answered the questionnaire, 346 (65%) were receiving imatinib; 102 (19%), dasatinib; 59 (11%), nilotinib; 18, HQP1351 (a 3rd generation TKI under study in a clinical trial); 3, ponatinib; and 2, flumatinib (a new 2nd generation TKI developed in China). Median TKI-therapy duration was 42 months (range, 1-182 months). All 530 were in the CP when they answered the questionnaire. Eighty-one (15%) had a complete haematologic response (CHR); 52 (10%), a complete cytogenetic response (CCyR); and 387 (73%), a major molecular response (MMR).
All 530 subjects remained resident in Hubei Province during the epidemic. Four reported close contact with SARS-CoV-2-infected persons. Eighteen subjects had an acute respiratory illness including fever (n = 8), cough (n = 7), sore throat (n = 4), fatigue (n = 3), and shortness of breath (n = 2). Eleven had mild symptoms, were isolated at home, and recovered within 2-4 days. The other seven had moderate or severe illness and were hospitalised. Two subjects with a negative qRT-PCR for SARS-CoV-2 and no abnormality on lung CT scan were excluded. Four subjects had confirmed COVID-19, and one subject was classified as probable COVID-19 because no qRT-PCR SARS-CoV-2 testing was done. The cumulative prevalence of confirmed and probable COVID-19 cases was 0.9 percent (0.1, 1.8%).
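The reported interval is consistent with a normal-approximation (Wald) interval around the observed proportion of 5 confirmed or probable cases among 530 subjects; the paper does not state which interval method was actually used, so this is a plausibility check rather than a reproduction.

```latex
\[
\hat{p} = \frac{5}{530} \approx 0.94\%, \qquad
\hat{p} \pm 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
= 0.0094 \pm 1.96 \sqrt{\frac{0.0094 \times 0.9906}{530}}
\approx (0.1\%,\ 1.8\%)
\]
```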
Comparison of baseline co-variates of the subjects with and without COVID-19
Baseline co-variates of subjects with (n = 5) and without COVID-19 (n = 525) are shown in Table 1. Subjects with COVID-19 were more likely to have been in the accelerated or blast phase at diagnosis (2 of 5 versus 2%; P = 0.004), to have no CHR at diagnosis of COVID-19 (2 of 5 versus 2%; P = 0.003), to have ≥1 co-morbidity(ies) (4 of 5 versus 26%; P = 0.024), and to have had contact with confirmed or suspected persons (1 of 5 versus 0.6%; P = 0.037). One of 21 (5%) subjects receiving a 3rd generation TKI developed COVID-19 versus 3 of 346 (1%) subjects receiving imatinib and 0 of 162 subjects receiving 2nd generation TKIs (P = 0.095). There was no difference between the cohorts in sex, age, or TKI-therapy duration.
Clinical features and outcomes of the five subjects with confirmed or probable COVID-19 are summarised in Table 2 and Supplementary 2. Four subjects with confirmed mild (n = 1) or common (n = 3) COVID-19 had typical symptoms and/or lung CT scan findings; all recovered. An older female subject (case 5) with probable critical COVID-19 had typical lung CT scan findings (Fig. 1) and developed ARDS. Her condition deteriorated rapidly and she died of multiple organ failure. Four subjects remained on TKI-therapy during COVID-19 treatment.
Whether persons with CML are immune compromised is controversial [11]. However, TKI-therapy is immune suppressive [12-14]. Based on these data one might expect a higher incidence and prevalence of SARS-CoV-2 infection and higher case- and case-fatality rates of COVID-19 in persons with CML compared with normals. We found a 0.9 percent prevalence of COVID-19 in persons with CML receiving TKI-therapy in Hubei Province, ninefold higher than the reported 0.1 percent (0.11, 0.12%) incidence by April 10, 2020 in the general population (http://en.nhc.gov.cn/2020-04/11/c_79032.htm). Clinical features of the confirmed COVID-19 cases in our survey were like those reported in Hubei Province [15-17]. One subject died, but our sample size is too small to compare with the published case-fatality rate of about 4% [18, 19].
We found several co-variates associated with an increased risk of developing COVID-19, including exposure to someone infected with SARS-CoV-2, no CHR, and co-morbidity(ies). There was also an increased risk of developing COVID-19 in subjects in advanced phase CML at diagnosis, even when they had achieved a CCyR or MMR at the time of the pandemic. One of 21 subjects receiving 3rd generation TKIs developed COVID-19 compared with 3 of 346 subjects receiving imatinib and none of 162 subjects receiving 2nd generation TKIs (P = 0.096). These data suggest possibly different risks but need confirmation. There are no data on whether 3rd generation TKIs are more immune suppressive than other TKIs. Also, 1 of 2 subjects receiving flumatinib developed COVID-19; however, that subject had synchronous diagnoses of CML and COVID-19, excluding a causal role of the drug. Why subjects with advanced leukaemia at diagnosis had a higher risk of COVID-19 despite responding well to TKI-therapy is unclear.

(Table 2 notes: one person did not start TKI-therapy at onset of COVID-19; 3rd generation TKIs included ponatinib and HQP1351.)
There are several limitations to our study. First, there were selection biases. Because the survey was made available online, respondents were self-selected. These persons had computer access and competence and tended to be proactive in seeking information and resources for their care; as such, the likelihood of detecting SARS-CoV-2 infection and COVID-19 is higher than in the general Hubei population. Second, not all 18 subjects with acute respiratory illness were tested for SARS-CoV-2 infection, so our prevalence estimate may be an under-estimate.

In summary, our survey suggests that although the rate at which persons with CML receiving TKI-therapy develop COVID-19 may be higher than in the general population, the absolute case-rate is very low and clinical features are like those in normals. Persons with no CHR, with co-morbidity(ies), with advanced phase at diagnosis despite responding to TKI-therapy, and those exposed to someone with SARS-CoV-2 infection may benefit from increased surveillance and possible protective isolation.

Author contributions: WML, QJ, and LM designed the study. DYW, JMG, GLY, ZZY, YY, ZCC, SMC, CCW, XJZ, WC, LSS, HC, YSZ, QL, and JQ collected the data. WML, QJ, and RPG analysed the data and helped prepare the typescript. All authors gave final approval and supported submission for publication.
Conflict of interest The authors declare that they have no conflict of interest.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
|
Anxiety, an emotion characterized by tension and restlessness, is associated with mental and physical discomfort [1]. The prevalence of anxiety disorders is up to 15% in the general population globally [2]. Generalized anxiety disorder (GAD) has been reported in 8.4% of adults from the Manaus Metropolitan Region [3]. As a type of psychological stress, anxiety triggers a series of physiological events and causes a decrease in immunity [4]. Sufferers of anxiety can experience other physiological symptoms including fatigue, abdominal pain, headaches, dizziness, nausea, palpitations, and urinary incontinence [5]. Sleep, considered a fundamental operating state of the central nervous system, may be one of the most important basic dimensions of brain function and mental health [6, 7]. Good sleep quality is important for optimal health status and wellness [8]. Previous research suggests that better sleep quality can improve emotional well-being [9-11]. A web-based study showed a high prevalence of GAD and poor sleep quality in the Chinese public during the COVID-19 outbreak [12]. Some studies have also reported an association between sleep quality and anxiety using the Pittsburgh Sleep Quality Index (PSQI) [8, 13-16]. However, another study has shown that it is difficult to determine cause and effect between sleep disturbance and anxiety [17].
Although studies have shown that poor sleep quality is associated with a higher prevalence of anxiety, these findings have been limited by study populations and methodological variations, especially among underdeveloped rural populations in China [18]. China has a population of 1.4 billion, of which the rural population accounts for 39.4%, according to data from the China National Bureau of Statistics in 2019 [19]. The prevalence of mental disorders has risen dramatically in China in recent decades [20], and the prevalence of anxiety in rural China is higher than that in urban areas [21, 22]. Moreover, Henan is the most populous province, with 48.3% of its population rural in 2018 [19], and more than one fifth of participants there had poor sleep quality [23]. Focusing on people living in this underdeveloped region is therefore significant. In addition, genetic studies have shown that genes associated with circadian rhythms are related to a range of mental disorders [24], so the association between sleep quality and anxiety symptoms may provide clues to the neurobiological mechanisms of mental disorders. In this context, to fill this gap and add to the evidence for an adverse effect of poor sleep quality on anxiety symptoms, this study aimed to investigate the relationship between sleep quality assessed by the PSQI and anxiety symptoms in a Chinese rural population aged 18-79 years, and to determine whether age, lifestyle, and chronic diseases modified this association.
The participants of the current study were included from the Henan Rural Cohort, which was registered in Chinese Clinical Trial Register (Registration number: ChiCTR-OOC-15006699) and has been previously described in detail [25, 26] . Briefly, villagers aged 18-79 years were recruited from July 2015 to September 2017 by a multistage cluster sampling method from the local general population. Firstly, five rural counties in Henan province (central, south, north, east, and west) were chosen through simple cluster sampling on the basis of the local sufficient population source, support of the masses and local leadership, and medical conditions. Secondly, one to three rural townships of each county were selected by the local Centre for Disease Control and Prevention. Thirdly, the residents who gave a written informed consent were included as the study sample from each village of the selected township. Finally, a total of 39,259 participants (15,490 men and 23,769 women) who signed informed consent were included.
For the current analysis, a total of 29,995 participants completed the evaluation of anxiety symptoms. Furthermore, participants were excluded if they had missing data on PSQI score (n = 269). Because of the impaired sleep quality in shift workers [27, 28] and cancer [29] , the participants who had self-reported experience of night shift work (n = 1530) or a history of cancer (n = 285) were further excluded to minimize the confounding bias. Finally, a total of 27,911 subjects were included in the present study.
Ethics approval was provided by the Zhengzhou University Life Science Ethics Committee, and signed informed consent was obtained for each participant. In addition, permission to administer each of the questionnaires, measures, or scales in the current study was obtained from every participant.
Data collection was performed by well-trained investigators in a face-to-face interview adopting a structured questionnaire. Demographic variables of participants included gender, age (continuous variable), marital status (married/cohabitation, other), educational levels (primary school or below, junior high school and senior high school or above), smoking status (non-smoker, or current smoker), alcohol consumption (non-drinker, or current drinker), and personal and family history of diseases.
Physical activity levels were classified into three categories (light, moderate, and vigorous) with reference to the criteria of the International Physical Activity Questionnaire [30]. Additionally, anthropometric measurements were conducted on the basis of a standard protocol [31]. Height and weight were measured with individuals wearing light clothes and barefoot, to the nearest 0.1 cm and 0.1 kg, respectively. Body mass index (BMI) was computed as body weight in kilograms divided by the square of height in meters.
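For concreteness, the BMI definition in code (a trivial sketch; the example values are hypothetical):

```python
# BMI exactly as defined above: weight (kg) divided by height (m) squared.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.72), 1))  # -> 23.7
```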
Information on sleep was collected with the PSQI [32], which consists of 19 items. The scale, scored from 0 to 21, has been widely used to evaluate sleep quality. A previous study reported that a PSQI cut-off score of 6 yields a sensitivity of 89.6% and a specificity of 86.5% [32]; thus, participants with a PSQI score of at least 6 were considered to have poor sleep quality in this study. Self-reported night sleep duration was obtained by asking, "What time did you usually go to bed and wake up during the past month?" Sleep onset latency was assessed with the question, "How long (in minutes) has it taken you to fall asleep each night during the past month?" The sleep initiation time was calculated as bed time plus sleep latency, and night sleep duration was computed from wake-up time and sleep initiation time [33].
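The duration derivation described above (initiation = bed time plus latency; duration = wake-up minus initiation) can be sketched as follows; the time formats and the midnight handling are assumptions about how the computation might be implemented, not the authors' code.

```python
# A minimal sketch of the night sleep duration derivation described above.
from datetime import datetime, timedelta

def night_sleep_hours(bed_time: str, latency_min: int, wake_time: str) -> float:
    bed = datetime.strptime(bed_time, "%H:%M")
    wake = datetime.strptime(wake_time, "%H:%M")
    onset = bed + timedelta(minutes=latency_min)   # sleep initiation time
    if wake <= onset:                              # woke up the next day
        wake += timedelta(days=1)
    return (wake - onset).total_seconds() / 3600

print(night_sleep_hours("22:30", 20, "06:20"))  # -> 7.5
```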
The anxiety symptoms of participants were collected using the two-item generalized anxiety disorder scale (GAD-2) (feeling nervous, anxious, or on edge and not being able to stop or control worrying) yielding a sensitivity of 85% [34] . The scores of this scale ranged from 0 to 6. To examine the association between sleep quality and anxiety symptoms, this study dichotomized scores of the GAD-2 scale. Participants were defined as having anxiety symptoms if they scored ≥3 in the current study [35] .
Means ± standard deviations (SD) and frequencies (percentages) are presented for continuous and categorical variables, respectively. Multivariable restricted cubic spline curves [36] with 3 knots (at the 5th, 50th, and 95th percentiles) were fitted to examine the association between continuous PSQI score and anxiety symptoms. Furthermore, the PSQI score was dichotomized to examine the association between poor sleep quality and anxiety symptoms, with good sleep quality as the reference group, using logistic regression models. In the fully adjusted model, potential confounders were adjusted for according to previous studies [14, 37], including age, gender, physical activity, marital status, smoking status, drinking status, educational levels, average monthly income, BMI, night sleep duration, and napping duration.
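A sketch of the two models described follows, using Python/statsmodels as a stand-in for the authors' SAS/R workflow; the analysis file, variable names, and the use of patsy's cr() natural cubic spline with df=3 (approximating the 3-knot restricted cubic spline) are all assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of the two models described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

# Dose-response: continuous PSQI score modelled with a natural cubic spline
spline_fit = smf.logit(
    "anxiety ~ cr(psqi, df=3) + age + C(gender) + bmi", data=df
).fit()

# Dichotomized exposure: poor (PSQI >= 6) vs good sleep quality
df["poor_sleep"] = (df["psqi"] >= 6).astype(int)
logit_fit = smf.logit(
    "anxiety ~ poor_sleep + age + C(gender) + bmi", data=df
).fit()

# Odds ratio and 95% CI for poor sleep quality
or_est = np.exp(logit_fit.params["poor_sleep"])
or_ci = np.exp(logit_fit.conf_int().loc["poor_sleep"])
print(f"OR {or_est:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
```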
Considering the gender-specific prevalence of poor sleep quality and anxiety, we investigated the associations stratified by gender throughout the analyses [22, 23, 38]. Additionally, stratified analyses were conducted to examine whether the association between poor sleep quality and anxiety symptoms was potentially modified by age, gender, marital status, smoking status, drinking status, average monthly income, physical activity, BMI, snoring, hypertension, and type 2 diabetes mellitus (T2DM). Considering potential bias resulting from the exclusion of participants, we performed several sensitivity analyses of the association between sleep quality and anxiety symptoms by including subjects with shift work or cancer. A two-tailed P value of less than 0.05 was considered statistically significant. All analyses were run on SAS version 9.3 (SAS Institute) and R version 3.5.1.

Table 1 displays the demographic characteristics of participants by anxiety symptoms. A total of 27,911 participants were included in this study. The mean (SD) age was 55.96 (12.22) years; 16,743 (59.99%) subjects were women; the mean (SD) PSQI score was 3.79 (2.73); 6087 (21.81%) individuals were poor sleepers; and 1557 (5.58%) subjects had anxiety symptoms. Those with anxiety symptoms were more likely to have lower education and income, lower physical activity levels, and poorer sleep quality. In addition, the demographic characteristics of participants with missing and non-missing information on PSQI score were compared; the findings implied that there was no difference between the two groups except for age, educational levels, and napping duration (see supplementary Table 1 in additional file 1). Likewise, only minor differences were found in both men and women. Thus, the missing data might be random and should not affect the robustness of the current research.
Dose-response association between PSQI score and anxiety symptoms

Figure 1 presents the association between continuous PSQI score and the prevalence of anxiety symptoms. There was an increased likelihood of anxiety symptoms with elevated PSQI score in the crude model in the total population. After additional adjustment for age, gender, physical activity, marital status, smoking status, drinking status, educational levels, average monthly income, BMI, night sleep duration, and napping duration, the association appeared to be slightly enhanced and remained significant. Similar findings were observed in men and women.
Association between poor sleep quality and anxiety symptoms

Table 2 reports the results for sleep quality and anxiety symptoms, with a PSQI score of less than 6 as the reference category. Compared with the reference group, poor sleep quality (PSQI ≥6) was associated with a higher possibility of anxiety symptoms on multivariable analysis: odds ratio (OR) 3.85, 95% confidence interval (CI) 3.42-4.33 in the total population; OR 4.60, 95% CI 3.70-5.72 in men; and OR 3.56, 95% CI 3.10-4.09 in women.
The results of the stratified analyses for anxiety symptoms are shown in Fig. 2.
In sensitivity analyses, similar results were observed when we included the participants with shift working (See supplementary Table 3 in additional file 1).
The current study is the first to focus on the association between poor sleep quality and the odds of anxiety symptoms in a large Chinese rural population. The results demonstrated a significant positive association between poor sleep quality and anxiety symptoms in both men and women. In stratified analyses, stronger positive associations were observed among individuals aged 60 or above, smokers, and individuals with a light level of physical activity, obesity, and T2DM.
China is a country of 1.4 billion people, and 39.4% of them live in rural areas [19]. According to the newest data from the China National Bureau of Statistics in 2018, the rural population share of Henan was higher than the national level, accounting for 48.3% of the total population of the province [19]. However, in rural areas, most people with poor sleep quality are either not treated or treated inadequately. There is an obvious gap between urban and rural health levels, and the health level of rural residents is relatively low [39]. A previous study showed inequality and imbalance of medical facilities across 203,801 villages and 1609 townships in Henan province [40]. To our knowledge, there is still a lack of evidence on the association between sleep quality and anxiety symptoms in rural regions; this study is therefore meaningful for rural populations. The study found that those with anxiety symptoms were more likely to have a lower income and to be exposed to unhealthy lifestyles, such as a lower level of physical activity.
This study presents an association between poor sleep quality and anxiety symptoms, consistent with a previous study in which poor sleep quality was strongly associated with anxiety symptoms among women [41]. Several previous studies of patients undergoing coronary artery bypass grafting found that better sleep quality was related to a lower anxiety level [9, 42]. Some studies have also reported an association between sleep quality and anxiety symptoms using the PSQI [8, 13-16]. For example, one study found that patients with an increase in preoperative state anxiety had an 18.6% higher odds of prevalent poor sleep quality (95% CI: 1.074 to 1.115), after controlling for potential confounders [8]. Consistent with previous studies, our study found that poor sleep quality was associated with a higher possibility of anxiety symptoms (OR: 3.85, 95% CI: 3.42-4.33) in the total population. Although previous studies have reported an association between poor sleep quality and anxiety symptoms, they were limited to specific populations (adult women [13], older Chinese [14], cardiovascular patients [15], patients after coronary artery bypass surgery [8], and T2DM [16]). Few studies have been sufficiently large to show a statistically significant association between sleep quality and anxiety in a generally healthy population. However, another study found that poor sleep quality was associated with both depression and anxiety, whereas only daytime sleepiness was associated with anxiety symptoms in older adults [43].
The mechanisms behind the association between sleep quality and anxiety remain unclear. Lack of sleep can produce a range of adverse neurobehavioral and physiological changes, such as inattention, depression, impaired glucose tolerance, and sympathetic nervous system activation [44]. These changes may manifest as the onset of mental illness, including anxiety. Nevertheless, because these findings are based on cross-sectional data, they cannot confirm a causal relationship between sleep quality and anxiety, and the exact mechanisms require further study.
This study suggests that the government should strengthen public education, using mass media to publicize the need for exercise and to guide residents on how to carry out appropriate activities. At the same time, epidemiologists should focus on identification and early intervention in the population aged over 60 years.
Within families, relatives should pay attention to the sleep quality of family members and help them develop good sleep habits, so as to reduce the occurrence of anxiety symptoms. Future prospective studies should examine multiple facets of sleep quality with the aim of better characterizing it and improving treatments.
This study has the following strengths. First, poor sleep quality is a symptom of many health problems, including anxiety symptoms, hypertension [26], overweight/obesity [45], and coronary heart disease [46]. This study thoroughly clarified the association between poor sleep quality and anxiety symptoms in a large-scale rural population from the Henan rural cohort study. Second, this is the first analysis of this association in rural China, offering a chance to understand the relationship between sleep quality and anxiety symptoms in the Chinese rural population.
The current study also has some limitations. First, this was a cross-sectional study, so reverse causality is possible; long-term longitudinal studies are recommended to characterize changes in sleep quality and anxiety symptoms over time. Second, although the PSQI is a well-validated scale of sleep quality, recall bias cannot be entirely avoided. Third, we did not consider factors such as living arrangements or necessary medical treatments, which might affect sleep or anxiety symptoms. Finally, the population is not nationally representative; the extrapolation of the results may therefore be limited.
A dose-response association between PSQI score and increased odds of anxiety symptoms was observed. Moreover, this study found that poor sleep quality contributed to an increased prevalence of anxiety symptoms in a Chinese rural population, especially among those aged 60 years or above, smokers, and those with a light level of physical activity or obesity. These findings suggest that developing good sleep habits and reducing the occurrence of poor sleep quality may help prevent anxiety symptoms.
Supplementary information accompanies this paper at https://doi.org/10.1186/s12889-020-09400-2.
Additional file 1: Table S1. Differences in demographic characteristics between participants with missing and non-missing PSQI scores, stratified by gender. Table S2. OR (95% CI) of sleep quality and anxiety symptoms excluding shift-working participants, stratified by gender. Table S3. OR (95% CI) of sleep quality and anxiety symptoms including shift-working participants, stratified by gender.
Abbreviations PSQI: Pittsburgh Sleep Quality Index; GAD: Generalized anxiety disorder; BMI: Body mass index; GAD-2: The two-item generalized anxiety disorder scale; SD: Standard deviation; T2DM: Type 2 diabetes mellitus
|
Infectious diseases remain a significant contributor to the burden of disease in low-and middle-income countries (LMICs). Leading communicable diseases from AIDS, tuberculosis and malaria to diarrheal diseases, measles, and lower respiratory infections claim upwards of eleven million lives in these countries each year [1] . The burden falls disproportionately not only on some countries, but also on vulnerable parts of the population. Notably, 95 percent of deaths from respiratory infections and 98 percent of deaths from diarrheal diseases occur in LMICs [2] ; and diarrhea, pneumonia, measles, and malaria take many lives of children under five. Similarly, infectious diseases like schistosomiasis, hookworm and malaria contribute to anemia, worsening outcomes both of mother and child in pregnancy, while syphilis also adversely affects neonatal mortality. As for diseases that make up substantial portions of global disease burden-HIV/AIDS, tuberculosis and malaria-over 95 percent of the deaths caused by each of these diseases are also in LMICs. The toll of infectious diseases comes in mortality and morbidity, lost work productivity and economic losses, and the drag effect on those trapped or tipped into poverty by illness.
The attainment of the Millennium Development Goals (MDGs) is closely entwined with progress in reducing the burden of infectious diseases. MDG 6 focuses on combating HIV/AIDS, malaria and other diseases while steps towards meeting MDG 4 (reducing child mortality), MDG 5 (improving maternal health) and MDG 7C (improving basic sanitation and sustainable access to safe drinking water) also relate to the treatment of infectious diseases [3] . MDG 8E (providing access to affordable essential drugs in developing countries in cooperation with pharmaceutical companies) and MDG 8F (making available benefits of new technologies, especially information and communications, in cooperation with the private sector) not only align with these goals, but also suggest instrumental means for accomplishing them [4] .
From triple therapy for AIDS and directly-observed therapy for TB to oral rehydration salts and vaccines for childhood killers like diarrhea and pneumonia, the past decade has witnessed significant advances. Between 2000 and 2010, 45 global health technologies were introduced for use in resource-limited settings, and the current R&D pipeline for global health includes 365 medical products at various stages of development [5] . Nevertheless, the unfinished agenda will require further technology innovation. Health technologies diagnose, prevent and treat disease; reduce the risk of disease such as through improved sanitation; mitigate health outcomes (for example, by combating malnutrition); and ensure better delivery of these interventions.
Still, many of these technologies remain out of reach to millions who might benefit. For LMICs as a group, the annual health expenditure per capita is just under US$200 as of 2010 [6]. While middle-income countries together have seen a substantial rise in per capita annual health expenditure--from around US$50 in 1995 to about US$220 in 2010--health spending for households in low-income countries, overall, has remained much lower, climbing just US$16 over the same period to US$26 per capita. Though the price of antiretroviral (ARV) therapy has fallen dramatically by 99 percent over the past decade [7], less than a quarter of those in need of ARVs actually received treatment in 2010 [8]. This leaves at least 29.5 million people living with HIV residing in low- and middle-income countries still without treatment, based on 2009 prevalence data [9, 10]. Such technologies can also come at considerable cost to these health systems. Out-of-pocket payments remain the primary source for covering the cost of medicines in low- and middle-income countries [11].
Applying a systems thinking perspective, more might be done to reshape the enabling environment for innovating such health technologies. Meeting these twin goals of innovation and access is key to bringing technologies from bench to bedside. Focusing on the pharmaceutical value chain might offer insights on how best to ensure technology innovation and access appropriate to disease-endemic countries. Delivering effective artemisinin combination therapy for malaria provides a case example where several interventions along the value chain shape the availability and affordability of this treatment.
Delivering Existing Health Technologies to Those in Need--Artemisinin Combination Therapy for Malaria
The Affordable Medicines Facility-Malaria (AMFm) has sought to stem irrational treatment approaches to malaria. AMFm worked to negotiate lower prices for artemisinin combination treatment (ACT) with manufacturers, subsidize the purchase of ACTs through copayments, and support interventions that encourage rational use of ACTs. By sharply reducing the retail prices of ACTs, the initiative hoped to displace oral artemisinin monotherapies and other medicines, such as chloroquine and sulfadoxine-pyrimethamine, to which resistance had emerged. UNITAID, the Gates Foundation and DFID supported the piloting of this intervention with US$216 million, while the Global Fund complemented this by committing up to US$127 million towards supporting interventions for scaling up ACT use effectively [12]. The premise and preliminary findings behind this pilot illustrate the complexity of delivering even existing innovations to those in need. Early findings from the Health Action International pricing survey in six African countries showed that the AMFm price bested the originator brand and the lowest-priced generic, approaching but not yet consistently beating the prices of irrational alternatives, like chloroquine and sulfadoxine-pyrimethamine [13]. In less than a year, though, six out of eight pilot countries met or exceeded benchmarks for availability, price and market share of quality-assured ACTs, in both rural and urban areas [14]. Efforts in Tanzania showed that accredited drug dispensing outlets could complement these upstream interventions by increasing access to and dispensing of subsidized artemisinin combination therapy [15]. However, volatility in the upstream artemisinin supply has caused significant fluctuations in the price of the drug's active pharmaceutical ingredient. Though it may not entirely offset the greater demand for artemisinin from an expanded AMFm, the anticipated advent of artemisinin sourced from microbial production by late 2012 may help stabilize and secure the supply of this critical drug for ACT [16].
Innovation can take several forms. For disease-endemic countries, the technology challenge may not only be one of novel invention, but also local adaptation of an existing technology. Such adaptation might be to target locally endemic strains, as for meningococcal or pneumococcal vaccines, where the introduction of such technologies in LMICs has lagged. Or such health technologies might require being lyophilized, kept in cold chain storage, or perhaps soon stabilized in silk films [17] for transport in tropical climates. Or perhaps crucially, technology adaptation to meet the resource constraints in disease-endemic countries--where trained health personnel or health care infrastructure may be wanting--might be required.
Of course, encouraging innovation for disease-endemic countries is not necessarily the same as engaging disease-endemic countries in the process of innovating health technologies for their settings. Looking at industry-supported phase 3 clinical trials conducted by the twenty largest U.S.-based pharmaceutical companies, a third of such studies are now being conducted solely outside of the United States, and a majority of study sites now reside outside of the United States [18]. Much of this globalization of clinical research is to low- and middle-income countries. Shifting that involvement upstream--from conducting clinical trials in disease-endemic settings to working on the bench science--would also mark progress in building the innovation capacity of disease-endemic countries.
In considering access, several related dimensions matter, each corresponding to a different part of the value chain for delivering technologies. The primary focus for technology innovation is therapeutic access, which refers to whether diagnostics, drugs or vaccines are under research and development or not in the pipeline. Financial and structural access issues also play important roles in enabling diffusion of such technologies. The failure to deliver existing technologies, like AIDS treatment or oral rehydration salts for children with diarrhea, illustrates the challenges of financial and structural access barriers, respectively. In short, therapeutic access corresponds to the R&D pipeline, financial access to the market, and structural access to the delivery system.
Bridging the gulf between markets and disease endemicity
By deaths and DALYs, the focus on HIV/AIDS, tuberculosis and malaria on the global health landscape is understandable: the "Big Three" diseases account for over 4.3 million deaths per year [19]. While the burden of disease falls disproportionately still on low- and middle-income countries, there remains a significant paying market in industrialized countries as well. Boosted by public funding for these diseases, private sector interest is also correspondingly greater.
In a survey of R&D projects focused on neglected diseases, BioVentures for Global Health found 218 R&D projects on AIDS, tuberculosis and malaria--over four times the number of projects on diarrheal diseases (including rotavirus, cholera, typhoid fever, shigellosis, enterotoxigenic E. coli) and pneumococcal disease. By contrast, these other diseases of poverty, specifically various causes of diarrheal disease and pneumococcal infection, claim 3.8 million deaths each year [20] . The number of projects cannot tell the full story: the state-of-the-art and the technical feasibility of next steps vary by disease. Still the difference should provoke reflection on how priorities are set.
Traditional distinctions among Type I diseases (those endemic in both North and South, but with a sizable paying market in the North), Type II diseases (also endemic globally, but disproportionately so in developing countries, such as AIDS and tuberculosis) and Type III diseases (endemic only in developing countries) depend on the size of potential paying markets for these diseases. Such distinctions may help bound likely contributions and interest from the private sector in these areas of pharmaceutical discovery and R&D. Where there are no paying markets, market failures result.
Bridging this gulf, public sector investments can play an important role in driving this innovation. Between 2007 and 2010, the G-Finder survey found that 97 percent of the research funding backing neglected disease research projects originates from high-income countries [21] . Nearly 64 percent of all funders' monies comes from the United States. Most publicly funded product development partnerships concentrate their missions around a unifying disease and technology focus, but alternative approaches spanning a cluster of diseases raise the prospect of sharing a common technology platform.
As a market, emerging economies have caught the attention of the global pharmaceutical industry. On the one hand, the industry eyes the growing middle and upper classes of these emerging economies as potential paying customers. On the other hand, 960 million of the bottom billion in the world live in middle-income countries. This is in sharp contrast to two decades ago, when over 90 percent of the poorest of the poor lived in low-income countries. Most of these poor people live in countries such as India, Pakistan, Indonesia, and Nigeria, which have graduated from low- to middle-income status [22]. This has implications for pharmaceutical innovation and access. For example, in establishing tiering schemes that give preferential access, from licensed technology to product prices, this tension has made firms reluctant to offer such breaks to middle-income countries. This is reflected in the challenges that the Medicines Patent Pool faces in recruiting companies to voluntarily license their HIV/AIDS drugs for generic production as part of fixed-dose combinations. Similarly, the anchoring of the GlaxoSmithKline-initiated Pool for Open Innovation Against Neglected Tropical Diseases and the WIPO Re:Search Consortium--both efforts to pool building blocks of knowledge and to license them royalty-free to those working on neglected diseases--places limits on geographic coverage to only least developed countries as a starting condition.
This underscores the different circumstances facing emerging economies and other developing countries. The stark reality is that less than a quarter of all biomedical research publications and less than a third of all clinical trials in Africa even relate to diseases that comprise nearly 50 percent of the burden of disease on the continent [23]. The same study found that the research institutions most productive in publishing journal articles and filing patents were concentrated in a few African countries (notably South Africa, Nigeria and Egypt). Examining patterns of collaboration on biomedical publications, over three-quarters of these journal articles were co-authored with collaborators, but only 5.4 percent engaged institutions in more than one African country, while the preponderant majority involved collaborators in Europe or the United States. This pattern of collaboration, in part, motivated the formation of the African Network for Drugs and Diagnostics Innovation, with its focus on intra-African coordination and collaboration on R&D.
Mobilizing public and private sector resources, product development partnerships (PDPs) have stepped in to address the market's failure to bring forward treatments for neglected diseases. A study of 63 neglected disease projects at the end of 2004 tells an interesting story [24] . Half of these projects were conducted by multinational firms, invariably on a "no profit-no loss" basis. Projects from the other half were conducted on a commercial basis by small-scale entities: small and medium enterprises, developing country firms and academic research institutions. Arguably, these groups viewed the opportunity costs quite differently than did the multinational corporations. This may be an important insight in targeting incentives for companies to help overcome the market's failure.
A survey of product developers engaged in drug and vaccine R&D for neglected diseases suggested, however, that only 40 percent of such projects involved a PDP [25] , with the majority going forward without a PDP partner. These included strong involvement of academic institutions, particularly in the study of neglected tropical diseases. Less than 3 percent of biotechnology companies globally participate in neglected disease R&D, but this still comes to more than 100 firms. Thirteen of the twenty largest pharmaceutical companies are involved in such projects. Multinational drug firms have also begun to shift their patterns of R&D, with recent boosts in approvals of new drugs targeting orphan diseases [26] . In some respects, orphan diseases and neglected diseases may be two sides of the same coin. By value, both face small markets: orphan diseases with small numbers of patients, but treatments that may command high prices in industrialized countries; neglected diseases with millions of patients, but hopes for very low cost treatments per episode. Facing the looming cliff of patent expirations, some of these firms might also be seeing the opportunity costs of smaller markets differently than in the past.
Emerging economies may play an increasingly strategic role in this space. Already India and China are making significant investments in domestic R&D efforts. Among BRICS countries (Brazil, Russia, India, China, and South Africa), foreign assistance to other developing countries has risen at near double-digit rates from 2005 to 2010 [27] .
For Brazil, health has been a significant component of the country's foreign assistance budget. Brazil contributed approximately US$130 million to WHO and PAHO between 2005 and 2009 [28] and pledged US$20 million to the Global Alliance for Vaccines and Immunizations over a twenty-year period [29]. Brazil also provided over US$37 million to UNITAID between 2006 and 2011; and in May 2011, Brazil enacted a law that donates US$2 to UNITAID per international flight, a contribution estimated to grow into a US$12 million commitment annually [30]. Brazil has also initiated a public-private partnership to transfer ARV production technology to Mozambique [31]. Together with the government of Mozambique and the Vale Foundation (the philanthropic arm of a Brazilian mining firm with operations in Mozambique), Brazil's Institute of Technology in Pharmacos (Farmanguinhos) provided US$23 million to help build an ARV manufacturing plant [32]. Once operational, the plant will produce five ARV drugs and other pharmaceuticals, including a pain reliever and a drug for high blood pressure. In another example of South-South technology transfer, Farmanguinhos, with the Indian drug manufacturer Cipla, also partnered with the Drugs for Neglected Diseases Initiative (DNDi), a product development partnership, to bring to market a new fixed-dose artemisinin-based combination treatment, ASMQ, the first ACT with a three-year shelf life in tropical climates [33]. Farmanguinhos and Cipla agreed in 2008 to manufacture and provide ASMQ to the public sector in developing countries at cost (with a target price of US$2.50 per full adult treatment).
Innovation for neglected diseases, more often than not, is viewed less as an exemplar to emulate and more as an exception. Just because pharmaceutical and biotechnology firms contribute to neglected disease projects, some might argue that this does not translate to new models of R&D collaboration from which broader, more generalizable lessons might be derived for more commercially viable therapeutic areas.
The perception is that, by value, markets in LMICs are small, and the diseases are typically classified as Type II or III. After all, over three-quarters of global expenditures on pharmaceuticals are spent on the 16 percent of the world's population living in high-income countries [34]. However, the patent cliff faced by multinational pharmaceutical firms, the burden of noncommunicable diseases requiring treatment in low- and middle-income countries, the availability of public sector funding and philanthropic capital, and the growing footprint of indigenous innovation in the pharmaceutical sector of emerging economies might prompt rethinking this view.
Spurred initially by the struggle for affordable medicines to treat HIV/AIDS, a decade-long policy process-beginning with the WHO's Commission on Intellectual Property, Innovation and Public Health [35] , continuing with the World Health Assembly's adoption of the Global Strategy and Plan of Action on Public Health, Innovation and Intellectual Property [36] , and leading up to the recently released recommendations of the WHO's Consultative Expert Working Group on Research and Development: Financing and Coordination [37] -has sought to reshape the way in which health technologies come to market in resource-limited settings.
Several developments may shape and nurture the direction this approach to innovation emerging from developing countries takes. Important elements of this enabling environment include 1) access to building blocks of knowledge; 2) strategic use of intellectual property and innovative financing to meet public health goals; 3) collaborative norms of open innovation; and 4) alternative business models, some with a double bottom line.
Access to the building blocks of knowledge is key to innovation and technology transfer. Journal subscription costs pose barriers to accessing the latest developments in research. In response, WHO has supported the Health InterNetwork Access to Research Initiative (HINARI). Working with publishers, HINARI provides tiered access to journal articles for low-and middle-income countries. This approach has been imperfect, with all BRICS countries, Indonesia, Thailand and other middle-income countries ineligible for the discounted subscription arrangements despite sizable poor populations and under-resourced research institutions in these countries.
While such voluntary agreements provide work-around solutions for access to the research literature, access to other building blocks of innovation, such as virus samples, has proven more contentious. The past decade has witnessed shortfalls in the supply of influenza vaccine to meet the H1N1 pandemic and heightened concerns over the spread of emerging infectious diseases like SARS and avian flu. Not wishing to be last in queue for a vaccine or treatment, developing countries have sought assurances from WHO's Global Influenza Surveillance and Response System (GISRS) that the sharing of virus samples would result not just in vaccines for industrialized countries, but also in affordable access to such technologies in their own settings. The Pandemic Influenza Preparedness Framework lays out a standard material transfer agreement; a system for benefit sharing and for contributions from pharmaceutical manufacturers and public health researchers; and hortatory measures encouraging Member States to urge manufacturers to set aside vaccines for influenza strains with pandemic potential for stockpiling and use by developing countries, to engage in technology transfer efforts, and to make such vaccines and antivirals available under tiered pricing arrangements [38]. Whether these measures will suffice in the event of a pandemic will surely be tested in the years ahead.
Anticipating the need to scale up this technology, WHO has provided seed grants to 11 manufacturers in low- and middle-income countries to establish or enhance their capacity to produce pandemic influenza vaccine. The Netherlands Vaccine Institute was enlisted not only to provide training, but also to support an influenza vaccine technology platform, or hub, to facilitate the transfer of technology to these countries. Building upon a "robust and transferable monovalent pilot process for egg-based inactivated whole virus influenza A vaccine production," next steps are being planned for work under this technology platform [39]. Such technology platforms might potentially be developed for other diagnostic and therapeutic areas to accelerate innovation.

The strategic management of intellectual property rights is central to securing access to these building blocks of knowledge [40]. Even when publicly funded, patented inventions may not be easily accessible to other researchers or for use in disease-endemic countries. There may be even less incentive to share when such inventions are proprietary and privately funded. However, especially in the neglected disease space, both tiering and pooling arrangements have afforded greater access to needed research inputs. By tiering, preferential discounts or even royalty-free access to research inputs is provided, often bounded by field of use or geography. For AIDS drugs and many vaccines, tiered pricing arrangements offer price breaks to resource-limited countries. Further upstream, many neglected disease projects benefit from tiered licensing arrangements, whereby access is provided to compounds otherwise inaccessible due to their proprietary nature. The R&D pathway for repurposing existing drugs can be shortened considerably when such access is coupled with pre-clinical and even clinical data on these compounds. Under pooling arrangements, the transaction costs of bringing together needed inputs for research are lowered and cross-licensing is enabled.
With varying degrees of resulting access and other forms of success, a patchwork of ad hoc tiering and pooling arrangements has emerged. Both policy instruments have their place in ensuring broader access to research inputs. Collaborating with the Medicines for Malaria Venture, GlaxoSmithKline's release of the chemical structures and assay data of over 10,000 compounds with activity against the malaria parasite, Plasmodium falciparum, is such an example. This deposit into the pool of the European Bioinformatics Institute's ChEMBL database and the U.S. NIH PubChem database provides wide access under a Creative Commons CC0 license (work dedicated to the public domain with waiver of copyright) [41]. Tiering and pooling often work in tandem. The Medicines Patent Pool, the Pool for Open Innovation Against Neglected Tropical Diseases, and the WIPO Re:Search Consortium all represent pooling arrangements, each with different tiered access conditions on inputs into the pool. Some work by aggregating research inputs upstream in the R&D pipeline, and others recruit patented drugs downstream in the R&D pipeline. For example, the Medicines Patent Pool's mission is to secure voluntary licenses from pharmaceutical companies of HIV/AIDS drugs that might be used in new fixed-dose combinations or pediatric formulations. In so doing, the generic licensing of such combinations is meant to stir greater competition and thereby innovation and affordability of such treatments. Negotiating licenses, limited by field and geography, has proven challenging. To date, the only license with a company for AIDS medicines has been with Gilead. This license restricts manufacturing to India, but extends access to resulting products to a wider range of countries (though still excluding several middle-income countries) than under previous arrangements [42].
As with open access initiatives, funders can set norms supportive of sharing knowledge and also lower the risks of crossing the valley of death from pre-clinical to clinical testing. The NIH recently launched the National Center for Advancing Translational Sciences (NCATS).
Created to "catalyze the generation of innovative methods and technologies" that might bring diagnostics and therapeutics to first-in-human trials, NCATS provides a spectrum of intramural and contracted services that enable small firms and academic research institutions to secure needed preclinical support [43] . Funders can also invest their philanthropic capital in ways to ensure more affordable access in exchange for non-diluting cash for biotechnology start-up companies. With Gates Foundation support, the University of California, Berkeley extended co-exclusive licenses for the microbial synthesis of artemisinin--a key antimalarial drug--to Amyris Biotechnologies and the Institute for One World Health [44] . The University made the license royalty-free for malaria indications in exchange for Amyris Biotechnologies' commitment to produce artemisinin at no profit for treating malaria in the developing world. Amyris, in return, also received substantial philanthropic capital support (US$12 million) from the Gates Foundation. This enabled Amyris to develop proof of concept on its microbial synthesis process, which also has a dual market application for synthesizing biofuels.
Establishing collaborative norms of open innovation has deep roots in public sector funding of science. The Wellcome Trust and the U.S. National Institutes of Health (NIH) engaged leading centers involved in the Human Genome Project to agree to the Bermuda Rules, whereby investigators pledged to deposit gene sequences of every 1000 base pairs within 24 hours of completion into GenBank [45] . The intent was not only to encourage sharing of data, but also to prevent unnecessary patenting through defensive publishing. NIH has also issued guidance to grantees for the "timely release and sharing of final research data" for others to use [46] and for minimizing unnecessary encumbrances on the dissemination of publicly funded research tools [47] .
Increasingly, the pharmaceutical sector has recognized the value of open innovation [48] . From Merck's early efforts to place expressed sequence tags into the public domain to corporate participation in the Single Nucleotide Polymorphisms Consortium, pharmaceutical firms have understood the need to harness ideas from outside their walls to fuel in-house R&D innovation. The potential application for open innovation is most evident for emerging infectious diseases, where the patenting process might be outpaced by the speed of a spreading pandemic. The need for pooling patents on SARS to enable non-exclusive licensing anticipated the potential problems of intellectual property holdings on developing a diagnostic or treatment for the illness. However, the SARS epidemic came and went before the patent pool could be launched, highlighting the potential value of open innovation norms for emerging diseases.
Taking this a step further, fledgling efforts to conduct open source innovation in biomedicine have also begun.
In contrast to open innovation, open source innovation involves openness and transparency with the goal of shared research collaboration, while also preventing third parties from acquiring proprietary rights over what the community generates. Notably, India's Council for Scientific and Industrial Research has embarked on the Open Source Drug Discovery (OSDD) initiative, initially focused on TB drugs. Using a web-based platform, hundreds of volunteers at a network of universities, in India and elsewhere, have collaborated on re-annotating the Mycobacterium tuberculosis genome; their collective efforts made this possible in just four months. Supported with government funding, participants submit projects for open peer review, contribute to online efforts under a system of microattribution, and agree to share their work under a click-wrap license. In not pursuing a conventional path to drug discovery, OSDD hopes to "bring down the cost of drug discovery significantly by knowledge sharing and constructive collaboration" and "to discover new chemical entities and to make them generic as soon as they are discovered, thus expediting the process of drug discovery" [49].
For many neglected diseases, the case for an alternative business model is clear. On the demand side, the need to provide affordable health technologies responsive to public health needs and the local context of low-and middle-income countries is urgent. On the supply side, high volume, closer to marginal cost pricing might match this need. Fortunately, there is growing capacity among developing countries to respond. Developing country vaccine manufacturers already supply 64 percent of vaccines procured by UNICEF [50] ; and more than 80 percent of annual purchase volumes of antiretroviral drugs destined for low-and middle-income countries come from Indian generic manufacturers [51] .
Bridging the supply and demand side, there is need for a more efficient model for R&D innovation. This will likely require innovation of both products and processes. Described by various names, the idea of jugaad innovation, a Hindi word meaning "an innovative fix; an improvised solution born from ingenuity and cleverness; resourceful" captures, in part, the spirit of such efforts [52] . Others have applied the descriptor, "frugal innovation" [53] . However, it is important not to connote that such innovation will come as a quick fix. Nor will it come out of a sense of thriftiness alone, without a deeper understanding of the effective use of resources. The enabling conditions for such innovation must be carefully cultivated and nurtured, and this may require piloting new models for collaborative R&D efforts-sharing resources, risks and rewards more effectively [54] .
Working under the constraints of a resource-limited environment, such innovation must be resource-effective, but not substandard. This efficient use of resources is reflected in the Innovation Efficiency Index scoring of countries like China and India compared to developed countries. This index compares outputs from innovation against the constraint of available inputs for innovation in a country, and by this measure, China and India come out first and second in the world [55] . Perhaps fittingly, some have suggested that such innovation embraces Mahatma Gandhi's tenet of getting "more from less for more people," [56] best captured in his quote: "Earth provides enough to satisfy every man's need, but not every man's greed" [57] .
The enabling environment for resource-effective innovation will likely accelerate not only health technologies for infectious diseases, but also for the growing burden of non-communicable diseases. Technologies like echocardiography have clinical value whether the valvular heart disease traces to rheumatic fever, drugs or atherosclerosis. The process of technology transfer, the clinical trial platforms, and the training of scientists also build towards the common purpose of bringing these new health technologies from bench to bedside.
What is exciting for global health is that the world--both North and South--has much to benefit from these new approaches to innovation. Under increasing budget constraints themselves, industrialized countries might welcome cost-effective interventions born out of the genius of innovation under resource constraint. Such innovation might be potentially disruptive, in that a product targeted at the base of the economic pyramid--an initial market marginalized by competitors--might migrate "up market," displacing established technologies [58]. From point-of-care diagnostics to more affordable drugs and vaccines, the necessity to bring more appropriate and affordable health technologies may indeed spark a new approach to innovation.
The Program on Global Health and Technology Access, of which ADS serves as the Program Director and QRE an Associate in Research, provides consultancy services on intellectual property management for a product development partnership, the Drugs for Neglected Diseases Initiative (DNDi).
Authors' contributions
ADS led the conceptualization and design behind the overall organization of the manuscript and substantively contributed to the collection of information and data presented in this manuscript. ADS also wrote the majority of the manuscript and gave approval of the final submitted version. QRE contributed to the analysis and research that provided the evidence and examples given in this paper. QRE also assisted ADS with the drafting of the manuscript. All authors read and approved the final manuscript.
Authors' information
ADS is a Professor of the Practice of Public Policy and Global Health at Duke University in the Sanford School of Public Policy. He is also the founder and the Director of the Program on Global Health and Technology Access. Key focus areas of the Program's interdisciplinary work include developing alternative biopharmaceutical R&D models from precompetitive collaboration to public financing of R&D and evaluating models of innovation and access to health technologies. Dr. So has led the Strategic Policy Unit for ReAct, an independent global network for concerted action on antibiotic resistance; advised the Thematic Reference Group for Innovation and Technology Platforms for Infectious Diseases of Poverty, a working group of WHO's Special Programme for Research and Training in Tropical Diseases; and served as a member of the U.S. Institute of Medicine's Committee on Accelerating Rare Disease Research and Orphan Product Development. He has authored works on innovation for global health R&D, including a commissioned paper for the Institute of Medicine on "Sharing Knowledge for Global Health" and a paper on "Approaches to Intellectual Property and Innovation that Meet the Public Health Challenge of AIDS" for the Global Commission on HIV and the Law. This year, Dr. So was named one of the Robert Wood Johnson Foundation's Investigators in Health Policy Research. Previously, Dr. So was Associate Director of the Rockefeller Foundation's Health Equity program, where he co-founded a cross-thematic program on charting a fairer course for intellectual property rights, shaped the Foundation's work on access to medicines policy in developing countries, and launched a multi-country program in Southeast Asia, "Trading Tobacco for Health," focused on enabling countries to respond to the public health challenge of tobacco use. QRE is an Associate in Research for the Program on Global Health and Technology Access, helping lead analyses for the global health policy research projects of the Program.
|
Keywords: SARS-COV-2; Nucleocapsid (N) protein; COVID-19; Drugs; Vaccine.
The ongoing COVID-19 pandemic caused by SARS-COV-2 is a highly infectious disease posing a severe threat to worldwide public health [1], [2], [3], [4]. SARS-COV-2 has a high potential for human-to-human spread, contributing to rapid global dissemination [5], [6]. To date, the virus has been reported to cause 24,357,067 cases, with 830,150 deaths and 16,890,125 recoveries. Despite tremendous efforts in almost every country to mitigate transmission and develop therapeutics, there is still no specific treatment or cure [6], [7], [8], [9]. This prompted us to apply well-established in silico approaches with the ultimate aim of assisting the rapid design of new classes of drugs and vaccines against SARS-COV-2.
SARS-COV-2 is a single-stranded RNA virus with a 30 kb genome [6], [10], [11]. The main region of the genome, named ORF1a/b, covers two-thirds of its length and encodes nonstructural proteins (nsps) [12]. The remaining genome encodes four essential structural proteins: a small envelope (E) protein, a matrix (M) protein, a nucleocapsid (N) protein, and a spike (S) glycoprotein [6], [13], [14]. The current coronavirus anti-viral regime primarily targets the 3C-like (3CL) protease, the papain-like (PLP) protease, and the S protein [15], [16]. As protease inhibitors may act nonspecifically on homologous host proteases, there is a risk of host cell toxicity and severe side effects. The S protein is highly vulnerable to mutations, enabling it to acquire a different pattern of host cell receptor binding, which, in turn, aids the protein's escape from targeted therapeutics [15]. Considering this, novel strategies are needed to curtail infections caused by SARS-COV-2.
The N protein of SARS-COV-2 binds to leader RNA and plays several pivotal roles in RNA transcription and replication, and is thus considered an attractive pharmacological target [17], [18], [19]. Its primary function is to produce a ribonucleoprotein (RNP) complex, key to the formation of the highly ordered RNA conformation essential for viral RNA replication and transcription and for modulating the metabolism of infected cells [19]. Additionally, this protein regulates host-pathogen interactions involving reorganization of actin, progression of the host cell cycle, and apoptosis [20]. From an architectural point of view, the protein is composed of three distinct but highly conserved portions: the C-terminal domain (CTD), the N-terminal domain (NTD), and a Ser/Arg (SR)-rich linker [17]. The CTD functions as a dimerization domain, whereas the NTD and SR linker are responsible for binding to RNA and direct phosphorylation, respectively [6], [21]. The crystal structure of the NTD from SARS-COV-2 (PDB ID: 6M3M) has been found to interact with the 3´ end of the SARS-COV-2 genome through several key residues that mediate infectivity and confer a specific electrostatic distribution [17]. Such structural information is essential for accelerating drug discovery against this appealing drug target to block SARS-COV-2 production. Furthermore, the N protein is highly expressed during infection, is highly immunogenic, and has the potential to induce protective immune responses against SARS-COV-2 [22], [23].
Herein, we adopted a comprehensive in silico methodology to identify potent anti-viral leads, protective antigens and diagnostic markers against SARS-COV-2 N protein. Findings of this study may promote the discovery of new anti-viral drugs and vaccination strategies against this high priority virus.
The complete methodology used in the present work is summarized in Fig.1.
The study commenced with the retrieval of the crystal structure of the N protein from the PDB database, available under PDB tag 6M3M [17]. The structure was determined by X-ray diffraction at a resolution of 2.70 Å. The 3D structure was then treated in UCSF Chimera [24] for structure editing to optimize receptor energy. During the process, the protein first underwent minimization using a steepest descent algorithm with a step size of 0.02 Å for 100 cycles. Afterwards, the conjugate gradient algorithm was applied to the structure for the default ten steps. Both algorithms clean the structure by improving local interactions, correcting widespread structural errors, and moving the structure towards a local minimum. Next, missing hydrogen atoms were added to the receptor, and charges for standard and non-standard residues were assigned by means of AMBER ff14SB [25] and AM1-BCC [26], respectively. Both the initial crystal structure and the post-treatment minimized structure were examined using PDBsum PROCHECK analysis [27] to assist in selecting the starting receptor for the high-throughput anti-viral scaffold screening.
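As a point of reference, the sketch below illustrates an analogous local energy-minimization step in Python using OpenMM. This is a swapped-in illustration, not the authors' protocol (the study used UCSF Chimera's steepest-descent/conjugate-gradient minimizer); the input and output file names are hypothetical.

```python
# Hedged sketch: short energy minimization of a protonated receptor
# structure with OpenMM, mirroring the ~100-cycle minimization in the text.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import PDBFile, ForceField, Simulation, NoCutoff, HBonds

pdb = PDBFile("6m3m_protonated.pdb")            # hypothetical pre-processed input
forcefield = ForceField("amber14-all.xml")       # AMBER ff14SB parameter set
system = forcefield.createSystem(pdb.topology, nonbondedMethod=NoCutoff,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = Simulation(pdb.topology, system, integrator)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy(maxIterations=100)            # move towards a local minimum
with open("6m3m_minimized.pdb", "w") as fh:
    PDBFile.writeFile(pdb.topology,
                      sim.context.getState(getPositions=True).getPositions(),
                      fh)
```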
The Asinex anti-viral library provides a meaningful starting point, assembling chemical entities with potent anti-viral activity and good safety profiles for the discovery of new leads. The compounds are also easy to access and purchasable for testing in experimental assays. The anti-viral library was retrieved in SDF format and subsequently filtered in PyRx software [28] to select compounds that fulfil the Lipinski rule of five [29]. As per this rule, only compounds with molecular weight ≤ 500 Dalton, MlogP ≤ 4.15, N or O ≤ 10, and NH or OH ≤ 5 are selected. The primary library contained 6827 compounds, which was reduced to 4860 after removing molecules violating the rule. This new list of compounds was energy minimized and converted to .pdbqt format for the docking study.
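The rule-of-five pre-filter described above is straightforward to express in code. The following is a minimal RDKit sketch of the same filtering logic (the study used PyRx; this is an illustration, and "asinex_antiviral.sdf" is a hypothetical file name). RDKit's Wildman-Crippen MolLogP is used here as a stand-in for MlogP.

```python
# Hedged sketch: Lipinski rule-of-five filter over an SDF library.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_ro5(mol):
    """Lipinski criteria as described in the text."""
    return (Descriptors.MolWt(mol) <= 500           # molecular weight (Da)
            and Crippen.MolLogP(mol) <= 4.15        # logP proxy for MlogP
            and Lipinski.NumHAcceptors(mol) <= 10   # N or O acceptors
            and Lipinski.NumHDonors(mol) <= 5)      # NH or OH donors

supplier = Chem.SDMolSupplier("asinex_antiviral.sdf")   # hypothetical input
kept = [m for m in supplier if m is not None and passes_ro5(m)]
print(f"{len(kept)} compounds retained")  # 4860 of 6827 in the study
```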
The amino acid sequence of the SARS-COV-2 N protein was scanned for potential B and T cell epitopes capable of evoking strong but protective immunological responses. To accomplish this objective, we employed the Immune Epitope Database (IEDB) [30], where linear B cell epitopes were first predicted using BepiPred Linear Epitope Prediction 2.0 with a cutoff score of > 0.5. In parallel, T cell epitopes were predicted, starting with MHC-I alleles, using the IEDB-recommended 2020.04 method (NetMHCpanEL 4.0) with a reference set of alleles (S-Table 1). The epitopes were sorted by IC50, and those with scores < 500 nM were selected. The MHC-II allele prediction was accomplished by selecting the IEDB 2.22 method and the full HLA reference set (S-Table 1). Each predicted epitope was then evaluated for its potential to evoke an immunological response by scanning the epitopes in VaxiJen (antigenic score cutoff ≥ 0.4) [31]. Next, the shortlisted immunodominant candidates were checked for allergenicity via AllerTOP version 2.0 [32]. Non-allergens were then filtered for toxicity using ToxinPred [33] to select non-toxic epitopes. Additionally, the non-homologous and virulent nature of the epitopes was verified by running the epitopes through a Blastp search against human (Homo sapiens: 9606) and through VirulentPred [34], respectively. Lastly, IFN-gamma-inducing epitopes were filtered via the IFNepitope web server [35].
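Since the predictions above come from a chain of web servers, the sequential filtering logic can be summarized as a simple table-filtering step once the server outputs are collected. The sketch below assumes a hypothetical "epitopes.csv" whose columns aggregate the per-tool calls; it is not the authors' pipeline, only an illustration of the cutoffs named in the text.

```python
# Hedged sketch: sequential epitope shortlisting with the text's cutoffs.
import pandas as pd

df = pd.read_csv("epitopes.csv")                    # hypothetical collected output
shortlist = df[
    (df["ic50_nM"] < 500)                           # MHC binding threshold
    & (df["vaxijen_score"] >= 0.4)                  # antigenicity cutoff
    & (df["allergen"] == "non-allergen")            # AllerTOP call
    & (df["toxin"] == "non-toxin")                  # ToxinPred call
    & (df["human_homolog"] == False)                # Blastp vs Homo sapiens
    & (df["ifn_gamma_inducer"] == True)             # IFNepitope call
]
print(shortlist[["epitope", "allele", "ic50_nM"]])
```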
All entries of the SARS-COV-2 N protein available in the NCBI COVID-19 data hub were retrieved and used to examine the conservation of the predicted IFN-inducing epitopes through the online IEDB conservancy analysis tool [36]. It was also important to include the population coverage of the epitopes in the study, as this indicates the percentage of a given population likely to respond to the epitopes; this was achieved using the IEDB population coverage analysis tool [37].
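The conservancy check itself reduces to counting exact epitope matches across the retrieved sequences. The following is a minimal sketch of that calculation, assuming a hypothetical FASTA export of the NCBI entries; the two epitopes shown are illustrative examples, not the study's selected list.

```python
# Hedged sketch: percent conservancy of an epitope across N protein entries.
from Bio import SeqIO

seqs = [str(rec.seq) for rec in SeqIO.parse("n_protein_seqs.fasta", "fasta")]

def conservancy(epitope, sequences):
    """Percent of sequences containing an exact match to the epitope."""
    hits = sum(epitope in s for s in sequences)
    return 100.0 * hits / len(sequences)

for ep in ["LLLDRLNQL", "GMSRIGMEV"]:   # illustrative epitopes only
    print(ep, f"{conservancy(ep, seqs):.1f}% conserved")
```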
The final set of conserved epitopes providing the greatest population coverage were fused to each other, followed by the addition of an adjuvant molecule to the epitope peptide. The 3D structure of the full ensemble was created ab initio using 3Dpro [38]. Loops of the model structure were refined to strengthen structural stability using GalaxyRefine version 2 [39]. Quality assessment of the ensemble 3D model was made using freely available tools: the Ramachandran plot from PDBsum [27], ERRAT [40], the VERIFY-3D score [41], and the PROSA Z-score [42]. The structure was then minimized to a local energy minimum and carried forward through the downstream vaccine design analyses. Different physicochemical parameters of the vaccine were predicted using ProtParam [43]. Solubility and aggregation-prone regions of the vaccine were predicted using Protein-Sol [44] and Aggrescan3D 2.0 [45], respectively.
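For readers who want to reproduce the physicochemical characterization locally, Biopython's ProtParam module computes descriptors comparable to those of the Expasy ProtParam server used in the study. The sequence below is a placeholder, not the actual construct.

```python
# Hedged sketch: ProtParam-style descriptors for a vaccine construct.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

VACCINE_SEQ = "MAKLSTDELLDAFKEMTLLELSDFVKKFEETFEVTAAAPVAVAAAGAAPAGAAVEA"  # placeholder
pa = ProteinAnalysis(VACCINE_SEQ)
print("Molecular weight (Da):", round(pa.molecular_weight(), 1))
print("Theoretical pI:", round(pa.isoelectric_point(), 2))
print("Instability index:", round(pa.instability_index(), 2))   # < 40 suggests stability
print("GRAVY:", round(pa.gravy(), 3))                           # hydropathy average
```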
During the vaccine design process, it is fundamental to identify candidates showing optimized affinity for a wide range of MHC HLA alleles. MHC clustering analysis was performed using MHCcluster v2.0 [46]. Further, the immune response profile of the vaccine construct was examined using an agent-based immune simulator, the C-ImmSim server [47]. The server employs a position-specific scoring matrix to spot immunodominant epitopes and machine learning methods to elucidate immune interactions. Most simulation parameters were kept at default: adjuvant = 100, number of antigen injections = 1000, random seed = 12345, and the vaccine was injected without LPS. A dose of 1000 antigen units is considered suitable to induce an appropriate immune response to a viral antigen [48]. The number of simulation steps was set to 1100 and the simulation volume to 110. The C-ImmSim server has been successfully applied in several studies to understand host immune system dynamics in response to an antigen [49].
A blind docking approach was applied for docking both the anti-viral ligands to the N protein and the vaccine ensemble to TLR3 (PDB tag: 1ZIW). For the anti-viral ligands, docking was performed with AutoDock4 [54], with the search box centered at 9.9921 Å on the X-axis, -3.8536 Å on the Y-axis and -12.7487 Å on the Z-axis, and dimensions of 31.6231 Å, 46.5257 Å, and 43.2407 Å, respectively; as a result, the whole surface of the receptor molecule was covered, allowing ligand molecules to bind freely to hotspots on the N protein. Each ligand molecule was docked 10 times to the receptor, and the best binding pose was selected as the one with the best binding affinity score in kcal/mol (a more negative score indicates better binding affinity). Docking of the vaccine ensemble to an innate immune receptor (TLR3, used here as a test case) was performed in PatchDock [55]; the generated complexes were refined with FireDock [56], and the complex with the minimum global energy was considered for visualization and analysis [57], [58], [59], [60]. Both AutoDock4 and FireDock are well suited to blind docking and to generating intermolecular poses that bind well to each other, yielding highly stable complexes; both are among the most widely used and cited docking programs and are freely available [61], [62], [63], [64], [56], [65]. The blind docking approach, which exposes the complete surface of the receptors (the N protein for drug identification and TLR3 for the vaccine ensemble docking), overcomes the limitations of site-specific docking, which in the majority of cases gives false-positive results [66]. Additionally, we validated the docking procedure by re-docking the co-crystallized ligands to the receptors using the same protocol applied in the virtual screening of the Asinex anti-viral library and in the vaccine ensemble docking to TLR3 [67]; both docking protocols gave coherent results, validating the protocol. Further, we employed the widely accepted and more accurate molecular dynamics simulation and binding free energy methods to validate the affinity of the drug molecules and vaccine ensemble [68], [69], [70].
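As an aside, a blind-docking search box covering the whole receptor surface, like the center and dimension values quoted above, can be derived directly from the atomic coordinates. The sketch below shows one common way to compute such a box (this is an illustration of the idea, not the AutoDock4 setup the authors used; "n_protein.pdb" and the 10 Å padding are assumptions).

```python
# Hedged sketch: derive a whole-surface grid box from receptor coordinates.
import numpy as np
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("N", "n_protein.pdb")
coords = np.array([atom.coord for atom in structure.get_atoms()])

center = (coords.max(axis=0) + coords.min(axis=0)) / 2.0   # box center (x, y, z)
size = coords.max(axis=0) - coords.min(axis=0) + 10.0      # extent + 10 A padding

print("grid center (x, y, z):", np.round(center, 4))
print("grid size   (x, y, z):", np.round(size, 4))
```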
Molecular dynamics simulations for both top complexes, the anti-viral ligand with the N protein and the vaccine ensemble with TLR3, were performed using the AMBER18 package [71]. N protein parameters were generated using the ff14SB force field [25], whereas the general Amber force field (GAFF) [72] was chosen for the anti-viral ligands. Topology files for both complexes were recorded with the LEaP module [73]. The systems were neutralized by adding an appropriate number of counterions. Next, both systems were submerged in a TIP3P water box with a padding distance of 12 Å. The water boxes containing the N protein-drug complex and the TLR3-vaccine complex are depicted in Fig.2. System minimization was achieved by running 1500 rounds of steepest descent and conjugate gradient to resolve unfavorable structural clashes. The systems were heated for 100 ps, with the temperature gradually increased from 0 K to 300 K at a pressure of 1 atm. After heating, the systems were equilibrated for 100 ps at a constant temperature of 300 K. A production run for each system was completed on a 100 ns timescale. The SHAKE algorithm [74] was applied to constrain all covalent bonds involving hydrogen atoms. Periodic boundary conditions were used with the solvation box in the canonical ensemble. Temperature was kept constant at 300 K using a Langevin thermostat [75], the non-bonded interaction cutoff was set to 8.0 Å, and the Ewald method was used for long-range interactions. Structural dynamics of the complexes were elucidated through several statistical parameters analyzed with the CPPTRAJ module [76] of AMBER. Snapshots were visualized with UCSF Chimera [24] and VMD [77].
Fig. 2. Water-box presentation of the drug complex (top) and the vaccine ensemble complex with their respective receptors (shown as yellow cartoons). The drug and vaccine constructs are shown as green spheres.
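The trajectory analyses mentioned above (run through CPPTRAJ) can also be scripted via pytraj, CPPTRAJ's Python front end. The sketch below illustrates two of the standard stability metrics; the file names are hypothetical and the choice of metrics is ours, not a claim about the exact analyses the authors ran.

```python
# Hedged sketch: basic trajectory stability metrics with pytraj.
import pytraj as pt

traj = pt.load("production.nc", top="complex.prmtop")   # hypothetical files
rmsd = pt.rmsd(traj, mask="@CA", ref=0)   # C-alpha RMSD relative to frame 0
rgyr = pt.radgyr(traj, mask="@CA")        # radius of gyration over the run
print("mean RMSD (A):", rmsd.mean())
print("mean Rg   (A):", rgyr.mean())
```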
Estimation of binding free energies for biomolecular complexes is a good way of validating the strength of intermolecular interactions and shedding light on which chemical energy terms dominate overall stability. The MMPBSA method in AMBER is a relatively straightforward approach for quantifying the binding free energies of ligands docked to a receptor, although it does not account for the entropy contribution [78]. The MMPBSA binding free energy was estimated using the equations given below:

ΔG bind = G complex − (G receptor + G ligand)
ΔG = ΔG MM + ΔG solv

ΔG is the net binding free energy of a given system, obtained by subtracting the combined energies of the receptor and ligand from that of the complex. ΔG MM is the gas-phase energy change estimated by molecular mechanics and consists of van der Waals and electrostatic energies. ΔG solv is the solvation free energy change and is the sum of polar and non-polar terms; the latter term is calculated via the solvent-accessible surface area (SASA).
The Waterswap method implemented in the Sire package permits estimation of absolute protein-ligand binding free energies [79]. A reaction coordinate is constructed that swaps the protein-bound ligand with an equivalent volume of water in the binding pocket. The method uses all available processing cores of a compute node, is slow, and can take at least five days for the free energy averages to converge. The average binding free energy is estimated simultaneously through thermodynamic integration (TI), free energy perturbation (FEP) and Bennett's acceptance ratio (BAR) methods [70]. Waterswap was run on the last 10 ns of the simulation trajectories to estimate the absolute binding free energies of the systems.
A primary phase of protein minimization was applied to the SARS-COV-2 N protein to lower its overall potential energy. This was necessary to bring the protein conformation as close as possible to that of natural biological systems, which are dynamic and low in potential energy, easing spontaneous interactions. However, a minimization event may introduce bad contacts in the structure and disturb the conformation of the molecule, which in turn might affect compound ranking in the docking-based virtual screening process. The minimization process showed that the minimized N protein is slightly better than the pre-minimization original structure.
Ramachandran plot investigation showed that both the original and energy-minimized structures have the same percentage distribution of residues across all four quadrants, as can be seen in S-Fig. 1.
Statistically, both structures placed 89.4% and 10.6% of residues in the most favoured and additionally allowed regions, respectively, whereas no residue fell in the generously allowed or disallowed regions. According to the G-factor assay, the minimized protein had a better overall G-factor score of 0.06, reflecting no unusual features in the structure, as opposed to the original structure (G-factor score of -0.14). Secondly, the PROSA Z-score indicated an overall good-quality, non-erroneous structure for the minimized N protein: the Z-score was -5.06 for the pre-minimized N protein and -5.16 for the minimized form. Based on this evidence, we used the minimized N protein structure in the downstream workflow.
High-throughput structure-based virtual screening was performed using the SARS-CoV-2 N protein as the receptor and drug-like molecules from the Asinex anti-viral library as ligands. As a blind docking strategy was employed, the entire surface of the protein was exposed for binding of the drug molecules; a representative configuration is sketched after the figure legend. The molecules bound to three different binding pockets, in loop region 1, the β-sheet core, and loop region 2 of the enzyme, as shown in Fig.3.

Chemical network of interactions of the compounds at their respective docked sites. Tags A to E represent compounds 1 to 5, respectively. The coloured discs represent residues of the N protein and can be understood as follows: dark green discs (hydrogen-bonding residues), light green discs (van der Waals residues), pink discs (pi-pi stacked residues), purple discs (pi-sigma residues), and cream discs (alkyl and pi-alkyl residues).
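The paper does not state which docking engine performed the blind screen; purely as an illustration, the sketch below sets up a whole-surface search with the AutoDock Vina Python API, with the box centre, box size and file names as placeholder assumptions.

```python
# Illustrative blind-docking setup (engine not specified in the paper; AutoDock
# Vina conventions are assumed). The search box is centred on the protein and
# sized to enclose its entire surface, so no pocket is pre-selected.
from vina import Vina

v = Vina(sf_name="vina")
v.set_receptor("n_protein_minimized.pdbqt")      # assumed file name
v.set_ligand_from_file("asinex_compound.pdbqt")  # one library compound

# Placeholder centre/size: large enough to cover the whole protein ("blind").
v.compute_vina_maps(center=[0.0, 0.0, 0.0], box_size=[60.0, 60.0, 60.0])

v.dock(exhaustiveness=8, n_poses=10)
v.write_poses("docked_poses.pdbqt", n_poses=5)
```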
The complete protein sequence was first analyzed for continuous B cell epitopes, which predicted 6 epitopes ranging in length from 7-mer to 28-mer with scores greater than the default cutoff (0.5). Such B cell epitopes are the recognition and binding sites of adaptive-immunity B cells. Once activated, a B cell matures, differentiates and produces soluble antibodies; the antibodies then bind to the epitopes and activate humoral adaptive immunity, neutralizing toxins and labeling the pathogen for destruction via T cell immunity. B cell epitope mapping therefore holds a central role in vaccine design [80]. The predicted B cell epitopes were examined for T cell epitopes. T cell vaccines induce protective cellular immunity and are significant in targeting mutating viruses like SARS-CoV-2; furthermore, T cell vaccines are regarded as more effective than conventional B cell epitope vaccines [81], [82]. In the present investigation, both B and T cell epitopes were used to design a highly efficacious vaccine. Antigens displayed on antigen-presenting cells bound to major histocompatibility complex (MHC) molecules are recognized by T cell receptors on the surface of T cells; T cell epitopes are presented via two classes, MHC-I and MHC-II, recognized by cytotoxic T cells and helper T cells, respectively. Several T cell epitopes were reported for each B cell epitope with IC50 values < 50 nM, indicating high affinity for the reference set of MHC alleles. Next, the T cell epitopes were subjected to an MHCPred assay to retain only those that bound with high affinity to the DRB*0101 allele, a prominent allele in the human population [83]; all T cell epitopes showed strong binding to DRB*0101, with IC50 values < 100 nM. The T cell epitopes were then analyzed for their ability to provoke the host immune system by passing them through the VaxiJen server and selecting those with predicted scores higher than the default 0.4. Antigenic T cell epitopes were subsequently filtered through allergenicity and toxicity checks, and only non-allergenic and non-toxic epitopes were retained; a toy version of this triage is sketched below. The final set of potential epitopes selected for vaccine ensemble design is listed in Table 1 and their exo-membrane topology is shown in Fig.5.
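The triage described above reduces to a chain of threshold filters. A minimal sketch follows, with hypothetical records and field names standing in for real prediction-server output; only the thresholds come from the text.

```python
# Toy re-creation of the epitope triage: IC50 < 50 nM for the reference MHC
# alleles, IC50 < 100 nM for DRB*0101 (MHCPred), VaxiJen antigenicity > 0.4,
# and non-allergenic/non-toxic. Records and field names are hypothetical.
candidates = [
    {"peptide": "EXAMPLE1X", "mhc_ic50": 12.0, "drb0101_ic50": 45.0,
     "antigenicity": 0.62, "allergenic": False, "toxic": False},
    {"peptide": "EXAMPLE2X", "mhc_ic50": 30.0, "drb0101_ic50": 210.0,
     "antigenicity": 0.55, "allergenic": False, "toxic": False},
]

selected = [
    c["peptide"] for c in candidates
    if c["mhc_ic50"] < 50          # high affinity for reference MHC alleles
    and c["drb0101_ic50"] < 100    # strong DRB*0101 binder
    and c["antigenicity"] > 0.4    # VaxiJen default cutoff
    and not c["allergenic"]        # allergenicity check
    and not c["toxic"]             # toxicity check
]
print(selected)  # -> ['EXAMPLE1X']
```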
MHC molecules are highly polymorphic, and thousands of MHC alleles are known, spread across the world population and across ethnicities. Peptide-based vaccine design that covers the majority of these alleles therefore offers an attractive route to a broad-spectrum, highly effective vaccine without ethnically biased population coverage.
Peptide-based vaccines are weakly immunogenic; therefore, the final set of epitopes was fused with rigid AYY linkers to produce a multi-epitope peptide [84], [85] (Fig.6A). The AYY linkers keep the epitopes separated and aid easy recognition and processing of the epitopes by the host immune system. An adjuvant in the form of β-defensin was also added to the N-terminal site of the epitope peptide to further strengthen its immune-provoking ability; β-defensin activates lymphokine production, which in turn results in antigen-specific Ig production along with activation of cellular immunity [86]. The adjuvant was ligated to the multi-epitope peptide through an EAAAK linker, which is likewise rigid and promotes easy immune recognition and processing; a sketch of the assembly and the physicochemical checks is given below. The complete construct was modelled ab initio, as no suitable template was available for homology-based modelling (Fig.6B). The structure has the majority of its residues (80%) plotted in the most favoured region, 12.90% in the additionally allowed region, 5.70% in the generously allowed region, and 1.40% in the disallowed regions (Fig.6C). In terms of secondary structure, the model consists mostly of alpha helix (53.0%), with 9.6% 3-10 helix and 37.3% beta and gamma turns (Fig.6D). The overall vaccine ensemble is antigenic with a score of 0.5060, soluble with a probability score of 0.940, and does not harbour any transmembrane helices; thus it is a good candidate for experimental follow-up. The construct has a weight of 8.9 kDa, a theoretical pI of 9.36, a stability index of 39.9, a GRAVY score of -0.602 and an aliphatic index of 53.01. The vaccine ensemble is also predicted to be highly soluble, with a predicted scaled solubility value of 0.775 (threshold, 0.45), and does not contain any aggregation-prone regions; its average aggregation score of -1.11 is far below the cutoff of 0.0.
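A minimal sketch of the construct assembly and of the physicochemical checks reported above is shown below, using Biopython's ProtParam module. The β-defensin sequence is an assumed variant (human β-defensin-3) and the epitopes are placeholders; Biopython has no built-in aliphatic index, so it is computed manually from the Ikai formula.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

adjuvant = "GIINTLQKYYCRVRGGRCAVLSCLPKEEQIGKCSTRGRKCCRRKK"  # assumed beta-defensin variant
epitopes = ["KTFPPTEPK", "LLLDRLNQL", "GMSRIGMEV"]           # placeholder epitopes

# beta-defensin + EAAAK linker + epitopes joined by AYY linkers, as described above
construct = adjuvant + "EAAAK" + "AYY".join(epitopes)

pa = ProteinAnalysis(construct)
aa_pct = pa.get_amino_acids_percent()  # mole fraction of each residue type
aliphatic_index = 100 * (aa_pct["A"] + 2.9 * aa_pct["V"]
                         + 3.9 * (aa_pct["I"] + aa_pct["L"]))  # Ikai (1980)

print(f"MW (kDa):          {pa.molecular_weight() / 1000:.1f}")
print(f"theoretical pI:    {pa.isoelectric_point():.2f}")
print(f"instability index: {pa.instability_index():.1f}")  # < 40 suggests stability
print(f"GRAVY:             {pa.gravy():.3f}")               # negative = hydrophilic
print(f"aliphatic index:   {aliphatic_index:.1f}")
```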
The vaccine ensemble interacted with a larger number of HLA alleles of MHC class I than of MHC class II. This implies an increased potential for the designed vaccine ensemble to be recognized by the vast majority of polymorphic HLA alleles, maximizing epitope presentation to immune cells and providing strong immune protection. The MHC I and MHC II HLA clusters are shown in S-Fig.4 and S-Fig.5, respectively.
Loop regions of the vaccine ensemble were modelled and refined to present the closest possible structure for downstream analysis. The structure was refined in consecutive rounds covering regions Arg14-Ser34, Asn53-Ser57, Gly63-Ser70, Tyr74-Ala83, and Ala47-Ala48, and the output structure was further refined for global and local conformations. In the improved vaccine ensemble structure, the percentage of residues in the Ramachandran-favoured regions increased to 96.3% from 93.8% in the original model, and the clash score dropped from 29.4 in the original structure to only 2.2 in the refined model. The MolProbity score is relatively low (1.241), illustrating the good quality of the refined model. Overall, the GALAXY energy of the refined vaccine ensemble is -1744.48, significantly lower than that of the initial structure (132.01), confirming good stability.
An in silico immune response model of the vaccine ensemble was generated, illustrating that the primary, secondary and tertiary immune responses are capable of clearing the pathogen (Fig.7).
The combined IgM + IgG response is critical to immune protection against the antigen, whereas IgG2 is the least productive. Similarly, significant interleukin and cytokine reactions are witnessed, with IFN-gamma the key immune-protective factor (Fig.7). Formation of different B cell isotypes against the antigen demonstrates a fundamental role of humoral immunity against the pathogen and the subsequent creation of antigen memory. Increased production of cytotoxic and helper T cell populations, with corresponding memory formation, further affirms that T cell immunity complements B cell immunity in protecting the host from the pathogen. The high populations of dendritic cells and macrophages make it evident that the set of epitopes used in constructing the multi-epitope vaccine ensemble can activate all components of the host immune system to combat the pathogen.
The vaccine ensemble was docked with TLR3, an innate-immunity receptor that prompts viral recognition and induces type I interferon production. Blind protein-peptide docking was performed to predict the predominant orientation of the vaccine ensemble with respect to the TLR3 receptor. In total, 100 conformations of the vaccine ensemble were generated, followed by refinement of each complex to select the most stable conformation.
Disulfide engineering is a directed approach for introducing disulfide bonds into a protein and is a logical way of reinforcing the stability of molecular interactions required in many industrial and biomedical applications [87]. The chimeric vaccine sequence was subjected to the Disulfide by Design 2.0 server, which highlighted 5 pairs of residues with high bond energies ranging from 1.26 kcal/mol to 6.75 kcal/mol. These residue pairs, Ile2-Ala61, Arg17-Cys40, Val20-Cys41, Cys21-Lys26, and Pro51-Pro62, were mutated to enhance vaccine stability (S-Table 2). The original and mutated vaccine structures are shown in Fig.6E. Furthermore, the vaccine sequence was reverse translated with codon usage optimized for the Escherichia coli expression system, to maximize expression of the vaccine in wet-lab experiments; a sketch of this step follows below. The codon adaptation index of the vaccine is 1, the ideal value for enhanced expression. Lastly, the vaccine sequence was cloned into the pET28a(+) expression vector (Fig.6F).
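Reverse translation at a codon adaptation index of 1 simply assigns every residue its most frequent codon in the host. A minimal sketch follows; the codon table is an illustrative approximation of E. coli usage, not taken from the paper.

```python
# Reverse translation with the single most-used E. coli codon per amino acid,
# which by construction yields a codon adaptation index (CAI) of 1.
# The table below is an illustrative approximation of E. coli K-12 usage.
ECOLI_TOP_CODON = {
    "A": "GCG", "R": "CGC", "N": "AAC", "D": "GAT", "C": "TGC",
    "Q": "CAG", "E": "GAA", "G": "GGC", "H": "CAT", "I": "ATT",
    "L": "CTG", "K": "AAA", "M": "ATG", "F": "TTT", "P": "CCG",
    "S": "AGC", "T": "ACC", "W": "TGG", "Y": "TAT", "V": "GTG",
}

def reverse_translate(protein: str) -> str:
    """Map each residue to its most frequent E. coli codon (CAI = 1)."""
    return "".join(ECOLI_TOP_CODON[aa] for aa in protein)

print(reverse_translate("MKT"))  # -> ATGAAAACC
```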
The conformational stability of the SARS-CoV-2 N protein in complex with the top drug molecule, and of the vaccine ensemble with TLR3, was deciphered by running 50 ns of MD simulation. Structural stability of the systems was monitored first by calculating the carbon-alpha deviation of 50,000 superimposed snapshots of the MD simulation (Fig.9, top left). For the drug complex, a small number of minor structural fluctuations were noticed in the protein, suggesting that the structure underwent few conformational changes during the course of the simulation; the average rmsd estimated for the drug complex is 1.38 Å, with a maximum of 2.53 Å. The variations were investigated by visualizing snapshots at regular intervals (0 ns, 10 ns, 20 ns, 30 ns, 40 ns, and 50 ns) superimposed in UCSF Chimera [88] (Fig.10A). The 3D alignment revealed an rmsd of 1.3 Å, clearly showing the stability of the drug complex. Over the simulation, the compound was seen in different conformations: the 4-isopropylphenol region of the molecule remained stable at the initial docked site, while the 2,5,5,7,9-pentamethyl-2,3,4,4a,5,10b-hexahydropyrano[3,2c]chromene-3-carboxylic acid moiety stretched along the channel, allowing the molecule to enter deep into the cavity and form extra van der Waals contacts, as reflected in the binding free energy estimates below. These movements result from the flexible loop regions of the N protein, which propel the molecule to alter its conformation by adjusting along the channel of the receptor; this behaviour is consistent with the binding free energy results, with the compound remaining bound at the cavity site and showing increasingly stable binding as the simulation progresses. In the same way, rmsd analysis of the vaccine ensemble with TLR3 was carried out (Fig.9, top left); the average rmsd of the TLR3-vaccine ensemble is 3.34 Å. Variations in the rmsd mainly reflect movement of the vaccine structure as it settles into a highly stable and immune-cell-recognizable pose, exerting force on the flexible regions of TLR3. This was affirmed by taking snapshots at regular 10-ns intervals and superimposing them in UCSF Chimera, which revealed an rmsd of 1.124 Å (Fig.10B). The receptor rmsd was followed by ligand rmsd to affirm the binding stability of the ligands with their receptors (Fig.9, top right). An average rmsd of 1.30 Å and a maximum of 2.41 Å were recorded for the drug compound; again, minor fluctuations in ligand rmsd were spotted, reflecting the conformational moves noted above, though these adjustments favour increased system stability. The vaccine ensemble rmsd is very low (0.15 Å), demonstrating very stable behavior of the vaccine on TLR3. Next, the compactness of the N protein structure was tested using the radius of gyration (Fig.9, bottom left). The average gyration value of the N protein in the presence of the drug along the simulation is 14.56 Å, with a maximum of 14.87 Å, clearly showing that the secondary elements of the N protein 3D structure are highly compact and remain stable in the presence of the drug molecule. The same compactness was noted for the TLR3-vaccine ensemble complex, with a mean radius of gyration of 33.72 Å. The number of hydrogen bonds formed between the drug/vaccine and the receptors is shown in Fig.9 (bottom right); these analyses can be reproduced as sketched below.
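The RMSD, radius-of-gyration and hydrogen-bond analyses above correspond to standard CPPTRAJ commands; a minimal sketch using pytraj, the Python front end to CPPTRAJ, is shown below with assumed file names.

```python
import pytraj as pt

# Load the production trajectory lazily (file names are assumptions).
traj = pt.iterload("prod.nc", top="complex.prmtop")

ca_rmsd = pt.rmsd(traj, mask="@CA", ref=0)   # C-alpha RMSD vs. first frame (A)
rog     = pt.radgyr(traj, mask="@CA")        # radius of gyration per frame (A)
hbonds  = pt.search_hbonds(traj)             # geometric hydrogen-bond search

print(f"mean/max C-alpha RMSD: {ca_rmsd.mean():.2f} / {ca_rmsd.max():.2f} A")
print(f"mean radius of gyration: {rog.mean():.2f} A")
print(f"mean H-bonds per frame: {hbonds.total_solute_hbonds().values.mean():.1f}")
```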
On average, the drug molecule contacted the N protein docked site via 1 hydrogen bond throughout the simulation period, whereas the vaccine produced a robust network of hydrogen bonds with TLR3 that was, on average, higher in each simulation frame.
MMPBSA is now commonly applied to biological systems to model molecular recognition and is a central focus in molecular simulations [89]. The method has a reasonable computational cost and is in routine use for estimating the binding free energies of small ligand molecules bound to a large biomolecular receptor [89]. The MMGBSA analysis, which is complementary to MMPBSA, revealed binding of the compound to the N protein with considerable affinity, as depicted by the net total energy of -30.45 kcal/mol. This net energy can be split into components: complex (-10452.14 kcal/mol), receptor (-10414.58 kcal/mol), and ligand (-7.10 kcal/mol). Both the gas-phase and solvation energies contribute significantly to the net energy of the system; the solvation term is dominated by the favourable polar energy (-15.27 kcal/mol), with minor support from the non-polar energy (-4.27 kcal/mol), while the gas-phase term is driven by van der Waals energy (-37.49 kcal/mol), offset by an unfavourable electrostatic contribution (26.60 kcal/mol). The detailed MMGBSA binding energies of the N protein-drug complex, receptor, ligand and the net energies are provided in S-Table 3. In the MMPBSA method, the net total binding energy of the system is -23.59 kcal/mol (delta gas phase (10.89 kcal/mol) + delta solvation energy (-12.70 kcal/mol)); together, the gas and solvation contributions play a balanced role in the binding affinity of the compound for the receptor. As in MMGBSA, the solvation energy reflects a substantial polar solvation contribution (-9.43 kcal/mol) compared to a smaller non-polar energy (-3.27 kcal/mol) in MMPBSA. The van der Waals component of the gas-phase energy, as reported in the MMGBSA, is the prime factor in the compound's interaction with the protein (37.49 kcal/mol), whereas, owing to the lack of ionic moieties that could impart an electrostatic character to the interactions, the coulombic contribution is insignificant (26.60 kcal/mol). The complete MMPBSA data for the N protein and drug molecule are presented in S-Table 4. Likewise, the TLR3-vaccine ensemble was highly stable, with an MMPBSA net binding energy of -47.96 kcal/mol. The stabilizing factor was revealed to be the gas-phase electrostatic energy (-1359.83 kcal/mol), which places the system in a lower energy state, along with -67.79 kcal/mol of van der Waals energy. The polar solvation energy is the non-favourable contributor (1390.67 kcal/mol) to the net solvation energy (1379.67 kcal/mol), in contrast to a small favourable non-polar energy (-10.99 kcal/mol). The full MMGBSA and MMPBSA data for TLR3 and the vaccine ensemble are given in S-Table 5 and S-Table 6, respectively.
To identify residues that are essential to ligand binding and favourable to complex stability, per-residue decomposition was performed. Residues common to MMGBSA and MMPBSA with negative average binding energies were categorized as essential amino acids vital to interactions with the ligand (Table 3). For instance, residues such as Trp5, Thr29, Asn30, Ile94, and Ile105 emerged as hot-spot residues owing to their profound contribution to complex stabilization. For the TLR3-vaccine complex, Ser387, Glu460, Tyr462, Tyr465, and Asn662 are hotspots anchoring the vaccine ensemble at the docked position. A sketch of the corresponding MMPBSA.py input, covering both the binding energies and the decomposition step, is given below.
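The binding free energy and per-residue decomposition analyses above correspond to a single MMPBSA.py run with GB, PB and decomposition sections enabled. The sketch below is illustrative; frame selection, salt concentration and file names are assumptions.

```python
# Illustrative MMPBSA.py input: combined GB and PB binding free energies plus
# per-residue decomposition. Frame range and salt settings are assumed values.
mmpbsa_in = """Binding free energy with per-residue decomposition
&general
  startframe=1, endframe=5000, interval=10, verbose=2,
/
&gb
  igb=5, saltcon=0.100,
/
&pb
  istrng=0.100,
/
&decomp
  idecomp=1, dec_verbose=0,
/
"""

with open("mmpbsa.in", "w") as handle:
    handle.write(mmpbsa_in)

# Typical invocation (file names are placeholders):
#   MMPBSA.py -O -i mmpbsa.in -cp complex.prmtop -rp receptor.prmtop \
#             -lp ligand.prmtop -y prod.nc -o energies.dat -do decomp.dat
```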
Efficient development of new vaccines and biologically useful drug molecules usually takes years of research effort and is a multibillion-dollar gamble. Using available pharmaceutically active and safe anti-viral agents, in silico and in wet-lab experiments, is a swift alternative for uncovering medications that may efficiently deal with deadly and evolving viral infections. The conventional drug discovery pipeline takes a decade to develop a safe anti-viral therapy [90], [91]. For diseases as contagious as COVID-19, there is not enough time, and the process must be accelerated by screening available drugs and repurposing them against the new disease threat, a process called drug repurposing. In an NIH-funded study published in the journal Nature, scientists screened a library of 12,000 existing drugs against SARS-CoV-2 using laboratory-grown human cell lines and non-human primates; they found 21 drugs with the potential to thwart SARS-CoV-2, 13 of which can be safely given to people. Most of these drugs have been tested clinically against autoimmune diseases, HIV, osteoporosis and other complications [92]. Recently, an international team led by Sumit Chanda at Sanford Burnham Prebys Medical Discovery Institute, together with Yuen Kwok-Yung's team at the University of Hong Kong, employed a high-throughput method to rapidly screen 1,987 compounds from the ReFRAME library [93], shortlisting 100 drugs that reliably hindered virus growth by at least 40%. A further 21 drugs were filtered based on dose-response relationships; among these was remdesivir, the FDA-approved drug originally developed against Ebola virus, as a possible treatment option against COVID-19 [94]. Additionally, the most potent drugs on the list reduced viral load by 65% to 85%; the most potent were apilimod (a drug in clinical trials to treat rheumatoid arthritis and Crohn's disease) and clofazimine (a 70-year-old FDA-approved drug for the treatment of leprosy) (https://directorsblog.nih.gov/2020/08/04/exploring-drug-repurposing-for-covid-19-treatment/). These findings suggest that existing and experimental drugs have the potential to treat COVID-19. Likewise, immuno-informatics tools can be employed to recognize immunologically active sites in the viral genome for developing epitope-based vaccine candidates. Such vaccines have several advantages over whole-organism-based vaccines, as they are safe and easy to produce [95], [96], [97]. Peptide-based vaccines significantly limit reactogenic and allergenic complications and trigger stimulation of B cells, T cells, or both simultaneously [96]. Batch-to-batch variation in peptide-based vaccine production is low, and production can be easily standardized [98]. The peptide structure is well defined, so structure and function can be easily correlated, in contrast to traditional vaccines; furthermore, peptides can be easily formulated into conjugated structures and multi-epitope vaccines [98].
Despite their many benefits, peptide vaccines possess lower immunogenicity, which can be overcome by fusing adjuvants into the formulation [96]. Such peptide-based vaccines are currently being considered and hold substantial promise for preventing human viruses like hepatitis C virus and HIV [96], [97]. In the work reported herein, an integrated study of computational drug lead identification and peptide-based vaccine ensemble design against the SARS-CoV-2 N protein is performed. The scientific and medical hunt for COVID-19 drugs and vaccines is in full bloom to stop the pandemic and subsequent waves of virus spread, as witnessed by the exponential number of COVID-19 publications in journals and preprint archives (around 22,000 in PubMed and 5,000 preprints in BioRxiv/MedRxiv) covering SARS-CoV-2 drugs and vaccines directly or indirectly [99]. Data sharing and a joint global push are much needed to prioritize drug and vaccine candidates and streamline clinical trials, and coordinating the regulatory process may be the promptest way of dealing with this deadly virus. Finally, we acknowledge that this work has several limitations due to the lack of experimental support; nevertheless, these results are promising and can save time and resources for scientific personnel directly involved in experimental therapeutics design. In this way, the translational distance between preclinical and clinical products might be reduced considerably, paving a path for the rapid practical development of drugs and the much anticipated vaccine.
The following are the supplementary data related to this article.
Supplementary File: S-Fig.1. Superimposition of the minimized SARS-CoV-2 N protein structure (blue) over the pre-minimized SARS-CoV-2 N protein structure (red). S-Fig.2. 2D structures of the top 5 compounds shortlisted after virtual screening. S-Fig.3. Population coverage by the selected set of epitopes. S-Fig.4. Cluster analysis of MHC I HLA alleles. Red presents stronger interactions, whereas yellow stands for weaker interactions. S-Fig.5. Cluster analysis of MHC II HLA alleles. Red presents stronger interactions, whereas yellow stands for weaker interactions. S-Table 1. List of reference alleles used for prediction of epitopes. S-Table 2. Pairs of residues selected for disulfide engineering. S-Table 3. MMGBSA binding free energies for the N protein and drug molecule complex.
J o u r n a l P r e -p r o o f S- Table 4 . MMPBSA binding free energies for N protein and drug molecule complex. S- Table 5 . MMGBSA binding free energies for TLR3 and vaccine complex. S- Table 6 . MMPBSA binding free energies for TLR3 and vaccine complex.
The authors of this study have no conflict of interest.
expressing adenovectors and adjuvant in the same MNAs, resulting in a vaccine that induced both antibody responses and enhanced cytotoxic cellular immunity, which is likely important for "universal" vaccines and cancer immunotherapies.
Taken together, these and other studies demonstrate the potential of cutaneous immune engineering strategies to control systemic immune responses, including the development of novel vaccine strategies and immunotherapies, and even negative immunization strategies to treat systemic allergy and autoimmune diseases. Advances in skin biology are making important contributions to the fight against the COVID-19 pandemic, demonstrating once again that dermatology is more than skin deep.
Betacoronavirus named 2019-nCoV by the WHO and severe acute respiratory syndrome (SARS)-CoV-2 by the International Committee on Taxonomy of Viruses.
As of March 28, 2020, the spread of COVID-19 in Italy had affected primarily people over 50 years of age (1) (Istituto Superiore di Sanità). The mortality rate appears to be higher for elderly patients: for people between 70 and 79 years of age the fatality rate has been 18.5 percent, whereas for patients below 50 years old it has been less than 1%, and for patients older than 80 years mortality has been about 25 percent. The WHO-China report (2) indicated that at the end of February 2020 the death rate (number of deaths/number of cases), i.e. the probability of dying if infected by the virus, was below 1% for all groups below 50 years old, 1.3% for 50-59 years old, 3.6% for 60-69 years old, 8% for 70-79 years old and 14.8% for those 80+ years old. On March 26 the CDC reported that case-fatality percentages in the U.S. similarly increased with age, with <1% deaths among persons aged 20-54 years, 1-3% among persons aged 55-64 years, 3-11% among persons aged 65-84 years, reaching 10-27% among persons aged >85 years old (3).
Reports from China indicate that men accounted for 60% of COVID-19 patients (4, 5) and that the COVID-19 fatality rate for men was 2.8%, compared to 1.7% for women (6). Moreover, 67% of patients admitted to the intensive care unit (ICU) were reported to be men (7). In Italy, 70% of those who died were men (8). In France, 73% of ICU admissions for COVID-19 have been men (9) (Santé Publique France). In Norway, that figure is 75% (10) (Norwegian Institute of Public Health), and in the UK it is 71% (11) (ICNARC). A Washington Post analysis of U.S. deaths so far also found that nearly 60% of deaths were male (12). These data indicate that there might be a gender predisposition to COVID-19, with men predisposed to being more severely affected (13) and older men accounting for most deaths.
To explain the biological basis of this gender predisposition to COVID-19, we should consider the complex biological network of testis-derived androgen, comorbidities, and infection-induced inflammation that impacts several critical organs and metabolic processes in the aging male, creating a permissive environment for SARS-CoV-2 to exert a lethal effect. Moreover, it is also likely that SARS-CoV-2 exerts direct effects on testicular function.
A systematic literature review was performed drawing on the biomedical literature and the academic electronic databases PubMed, ScienceDirect, and Google Scholar, as well as government and public health organization web sites, covering T, estrogen, aging, inflammation, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, and COVID-19 disease state and outcomes. Articles included were written in either English or French.
Made by the Leydig cells of the testis, testosterone (T) drives the establishment and function of the male reproductive system from gestation to adulthood (14). However, T levels decrease at a rate of 0.4% to 2% per year starting at around age 30, which can result in low serum T, termed "hypogonadism", at advanced ages (15-17). Primary hypogonadism is defined as reduced Leydig cell androgen production, in which LH levels have increased but the testis does not respond to pituitary stimulation by producing T. In secondary hypogonadism, gonadotropin-releasing hormone (GnRH) or LH levels are reduced and become inadequate to maintain T levels. In some men, there is a mix of central (hypothalamic and/or pituitary) and gonadal deficiencies.
The progressive decline in T with aging results in 20% to 50% of men over age 60 having significantly reduced T levels (18, 19). Age-related decline in T, along with associated symptoms, is referred to as late-onset hypogonadism (LOH) (20, 21). LOH is symptomatically characterized by loss of libido, erectile dysfunction, and loss of muscle mass, among other symptoms, as well as a greater likelihood of both metabolic syndrome and cardiovascular disease. Moreover, there is a significant interaction between low T levels and frailty (22) as well as mortality in older men (23), the latter being more pronounced in older men with metabolic syndrome (24). Recently the concept of functional hypogonadism has emerged, defined and diagnosed as the coexistence of an androgen deficiency phenotype and low serum T concentrations occurring in the absence of both intrinsic structural deficiencies and/or pathological conditions that suppress the hypothalamic-pituitary-gonadal axis (25).
Older age has also been associated with a host of co-morbidities. Older men exhibit a higher prevalence of metabolic syndrome, obesity, diabetes mellitus, and other chronic health conditions that can cause hypogonadism (16). These same co-morbidities also predispose older men to a higher incidence of severe disease and mortality. As several studies have noted, a number of causes of hypogonadism in older men are modifiable, and hypogonadism can be reversed with weight loss or better control of disease (26). Thus, low T levels in older men can be attributed not only to age, but also to the presence of co-existing co-morbidities and risk factors (16, 17, 27). However, a series of studies showed that androgen deprivation in prostate cancer patients induces components of the metabolic syndrome (28-30), and administration of T to hypogonadal men improves insulin resistance, obesity and dyslipidemia (23, 31).
Taken together, these data suggest that the low T seen in aging men could act as either a mediator or simply a confounder of the observed co-morbidities.
T replacement therapy (TRT) with exogenous T is currently the only FDA-approved therapy for the treatment of male hypogonadism. Although this treatment has associated risks (32-37), the indications, treatment, and follow-up for men being treated with T are well defined and generally considered safe for the treatment of male hypogonadism and LOH/functional hypogonadism (25, 38).
The menopausal transition is defined by a shift in the sex steroid profile, with a marked decline in ovarian estrogen production. After menopause, the ovaries no longer produce estrogen, and at that stage estrogen comes mainly from fat tissue. In women, androgens are produced by both the ovaries and the adrenals (39, 40). Serum T levels in women decline from 20 to 40 years of age, reaching a 50% decrease (40-42); the same is true for the androgens dehydroepiandrosterone sulfate and androstenedione. There is no further decline of T following the final menstrual period (43). In elderly women 60 to 80 years old, serum T increases again, although the extent of this increase is highly variable (40, 41). Interestingly, although T has been shown to play a role in frailty in older men, its role in older women seems less relevant (22).
Thus, although the age-associated decline in T occurs in both men and women, it starts much earlier in men and continues throughout life, while in women it reaches its maximum before the menopausal transition. In older women, unlike men, circulating T levels recover and, although variably, sometimes reach levels close to those found in the young (41). Moreover, the overall androgenicity ratio (androgen/estrogen) increases with age in women, since the decline in estrogen formation is dramatic (41, 43). This increase in androgenicity is likely behind the phenotypic changes seen in postmenopausal women, e.g., hirsutism (44). Whether this increase in T and androgenicity contributes to less severe outcomes and fewer deaths in elderly women, compared to men, remains to be explored.
The immune system is designed to protect the body against foreign pathogens in a rapid and specific manner. It does this in a highly complex way, continuously distinguishing between normal, healthy cells and unhealthy cells (because of infection or cellular damage). When the immune system recognizes these "unhealthy" cells, it responds and addresses the problem. However, dysregulation may occur and the immune system may auto-react, causing harmful effects; when the immune response is activated without a real threat, allergic reactions or autoimmune diseases can result.
Inflammation is the immune system's defense mechanism against pathogens and other harmful stimuli. It can be either the origin or the consequence of diseases of the cardiovascular, respiratory and reproductive systems. Inflammation is caused by the activation of innate and adaptive immune cells and by the proteins released by these cells.
Aging is associated with a progressive decline and remodeling of the immune system, resulting in an increased risk of severe outcomes from infectious diseases (45, 46). Aging is accompanied by elevated systemic inflammation (e.g., elevated serum concentrations of IL-6 and tumor necrosis factor α (TNFα)), as well as a decreased ability to respond to specific immunological challenges (47-49). This general impairment of overall immune function, together with the increased inflammatory response, is responsible for increased mortality with aging. Indeed, individuals over the age of 65 account for most influenza-related hospitalizations and over 70% of all influenza-related deaths (50, 51).
Moreover, among individuals 80 years or older, males are more likely than females to be hospitalized and succumb to influenza virus infections (52, 53).
Sex steroids, such as T, can have a significant influence on the function of inflammatory cells and the regulation of the immune response. T has major effects on health and disease by altering metabolic, cardiovascular, and immune functions (54-56). T plays a mainly suppressive role in immune function, acting on androgen receptors in immune cells to regulate target gene expression (57).
T suppresses immune cell activity by reducing the expression of inflammatory mediators and promoting that of anti-inflammatory mediators by macrophages and T cells, thus protecting against a variety of inflammation-mediated diseases (57, 58).
The role of T in the pathophysiology of inflammation has been reviewed by Traish et al. (59). Low T levels have been associated with higher rates of infection-related hospitalizations and all-cause mortality in male hemodialysis patients (60). Likewise, the age-related decline in T levels has been associated with increased mortality and disease severity following influenza infection (61). In addition, TRT administration in aged male mice decreases mortality and reduces disease severity independent of changes in viral replication or pulmonary inflammation (61). In humans, there are data suggesting that males develop a weaker antibody response to influenza vaccination than females, and T may play a role in this (62).
Interleukin-6 (IL-6) is a tightly regulated proinflammatory cytokine that is expressed at low levels except during infection, trauma, or other stress. T is among the factors shown to down-regulate IL-6 gene expression. Low T levels in young men have been associated with low-grade systemic inflammation and are believed to be part of the mechanism underlying adverse health outcomes in male hypogonadism (63). IL-6 levels are elevated in LOH, even in the absence of infection, trauma, or stress, and the age-associated increase in IL-6 is believed to account for some of the phenotypic changes of advanced male aging, particularly those resembling chronic inflammatory disease (47). Moreover, levels of the inflammatory marker soluble IL-6 receptor are also increased in older men (64), and higher IL-6 levels have been shown to inhibit androgen formation (65). TRT in hypogonadal men with chronic inflammatory conditions has been shown to reduce levels of TNFα, another inflammatory cytokine (66), suggesting that T may attenuate the inflammatory process and reduce the burden of disease. TRT was also shown to reduce the spontaneous, but not the inducible, production of inflammatory cytokines by monocytes (67), suggesting that the effect of TRT might be limited. In a recent paper, Bianchi reviewed the literature on the anti-inflammatory effects of T (68), concluding that T generally protects against inflammation independently of the clinical condition, although TRT was more effective in reducing inflammation in hypogonadal than in eugonadal men. Elevated IL-6 is a characteristic biomarker seen in the serum of patients infected with COVID-19 (69-71). Reports on the use of tocilizumab in COVID-19 cases, at doses similar to those used for the management of cytokine release syndrome, have shown rapid improvement in patients (72, 73). In these reports, the expeditious administration of anti-IL-6R therapy for patients in acute respiratory distress syndrome (ARDS) has been critical. The Society for Immunotherapy of Cancer encourages the use of IL-6- or IL-6-receptor-blocking antibodies like tocilizumab (Actemra, Roche-Genentech), sarilumab (Kevzara, Regeneron), and siltuximab (Sylvant, EUSA Pharma) that are FDA-approved for various pro-inflammatory conditions seen with cancers (74).
Liu et al. reported that about half of admitted COVID-19 patients developed ARDS (75). Half of the ARDS patients died, and elderly patients were more likely to develop ARDS. Critical illnesses, like ARDS, are associated with neuroendocrine changes linked to increased morbidity and mortality (76). Critical illness progresses in two stages, acute (hours to days) and chronic (weeks).
Cytokines are thought to be responsible for the early-response changes, while other endogenous (e.g. dopamine, cortisol) and exogenous (e.g. medications) factors contribute to the chronic changes (76, 77). During the early phase, T levels decrease, and they continue to decline during the chronic phase. The T reduction seems to be independent of LH levels, indicating dysfunction of the hypothalamic-pituitary-gonadal axis (78, 79). Fifty percent of men over 65 years old hospitalized for acute illness, such as respiratory tract infection, were found to be hypogonadal, and low T levels were related to in-hospital mortality (80, 81). ARDS, a complication of severe sepsis, and sepsis-related morbidity and mortality were shown to be more prevalent in men than in women, were linked to high IL-6 levels, and occurred independent of age and disease (82). Low T levels were reported in male patients with severe sepsis and respiratory failure (83, 84), suggesting that hypogonadism may create a permissive environment for severe outcomes in men. T is among the pharmacological interventions available to attenuate the catabolic response in critical illness (85). Indeed, low T is part of the proinflammatory profile associated with ARDS (86), and TRT has been shown to reduce airway inflammation in asthma (87).
As noted above, the interactions between aging, low T, comorbidities, and inflammation create a fertile environment for negative COVID-19 outcomes. However, it is also likely that SARS-CoV-2 exerts direct effects on testicular function and T formation; such an effect would exacerbate the effects of the virus on the aging male. The virus uses a glycosylated spike (S) protein to enter host cells and binds with high affinity to angiotensin-converting enzyme 2 (ACE2), which acts as the viral receptor in humans (88, 89). Transmembrane serine protease 2 (TMPRSS2) appears to enhance ACE2-mediated viral entry (90). The ACE2 enzyme is predominantly expressed in heart, kidney, and testis, and at lower levels in the bronchus and lung parenchyma, suggesting that it may play a critical role in cardiovascular, renal and testicular function, as well as in the respiratory system (91, 92).
Further studies have shown ACE2 to be abundantly expressed in the epithelium of the lung, where it serves as the receptor for the SARS coronavirus, the causative agent of SARS (91). There are currently no published data examining the effect of COVID-19 on testicular function, although the possibility has been raised (92). Interestingly, the testis has been found to be a target organ for other viruses, including HIV, HBV, mumps, Zika, Ebola and the SARS coronavirus, causing orchitis and occasionally hypogonadism, oligospermia and testicular tumors (93-103).
A search of the Protein Atlas database (104) showed that ACE2 RNA is expressed at high levels in male tissues and that the protein is expressed at extremely high levels in male tissues; both ACE2 protein and mRNA are present at their highest levels in the human testis. Early studies by Douglas et al. showed that ACE2 expression in the human testis is confined to Leydig and Sertoli cells and that it is a constitutive, luteinizing hormone (LH)-independent product of adult Leydig cells (105). Our findings have corroborated these observations, and our laboratory has found that ACE2 transcripts are present in human Leydig-like cells differentiated from human induced pluripotent stem cells (106). ACE2 expression is driven by the transcription factor FoxA3 (107), the only member of the Fox family of transcription factors present in adult Leydig cells (108, 109). The TMPRSS2 transcript was also found in human Leydig-like cells (106). These data suggest that testicular Leydig cells may be potential targets of COVID-19.
T is probably not the only reason why more elderly men than women die of COVID-19. Women have more robust immune systems than men, and estrogen, as well as genes encoded on the X chromosome, may play a positive role in developing this response (110). The complex immunomodulating role of estrogen has been reported (58, 111); its potential role in COVID-19 has been raised, and clinical trials are underway to evaluate its impact on the severity of COVID-19 symptoms. Although high estrogen levels in premenopausal women may be protective against COVID-19 infection, if estrogen were the primary protective factor for women, elderly postmenopausal women with COVID-19 would fare as poorly as elderly men. However, the absence of estrogen later in life does not seem to play a detrimental role in the severity of the response and outcomes of elderly women to COVID-19 compared to men. Sepsis provides the opposite model for testing whether estrogen is protective in older men, as it is a rare instance in which endogenous estrogen levels are increased: during the acute phase of severe sepsis in men with respiratory failure, in parallel with the dramatic decline in T levels, there is an increase in estrogen, mainly estrone, likely due to increased aromatization of testicular or adrenal androgen (83, 84, 112). Increased estrogen levels reflect a negative outcome in septic shock in men (84, 112). Nevertheless, long-term treatment with estrogen, like T, administered at youthful levels may offer protection.
Gender-related lifestyle behaviors may also play a role (110). For example, smoking is more prevalent in men than in women, although the evidence linking smoking to the severity of COVID-19 is limited and weak (13). Genetic predisposition has also been proposed as a susceptibility factor for COVID-19 (113).
Collectively, these data suggest that low T levels may facilitate the severity of COVID-19 infection in aging men and that the testis, most likely the Leydig cells, is a putative target organ for COVID-19. It may also stand to reason that normal T levels offer some protection against COVID-19. At present it is not known whether COVID-19 infection inhibits androgen formation, or whether the low T levels in hypogonadal men, e.g. with LOH/functional hypogonadism, create a permissive environment for the virus to act. However, in support of our hypothesis, a paper published during the revision of this manuscript reported clinical data showing a prognostic role for T levels in the severity of and mortality from SARS-CoV-2 pneumonia (114).
The situation with COVID-19 is still evolving, and only when the pandemic is over will we have a better characterization of the population at risk and the various factors (including medications and genetics) that impart increased risk of infection and mortality. Furthermore, only when the dust settles will we learn whether the disease truly is more detrimental to elderly men than to women.
However, considering the knowledge available today on COVID-19, and in the absence of preventive or therapeutic alternatives, three questions emerge: (i) Would monitoring testicular function with serum T, LH, follicle-stimulating hormone and sex hormone binding globulin levels in COVID-19-infected male patients allow stratification of risk to identify and treat patients who may go on to develop ARDS or other critical illness? (ii) Would treatment with TRT, in combination with other treatments, boost the immune system and improve overall outcomes in aging men infected with COVID-19? (iii) Does COVID-19 infection affect short- and long-term male reproductive function?
Given the preponderance of mortality in males, additional testing and treatment may be merited, particularly since the required tests and treatments are already available.
The COVID-19 pandemic is an international public health emergency on a scale unprecedented in the last century. This article describes how NICE responded to the pandemic to support colleagues across health and care nationally and contribute to the global response. It is an account that will be familiar to many during the pandemic, as it is one of adaptation and flexibility, navigating with information of varying amounts and quality, and of people coming together in difficult times.
In early Spring, as case numbers began to rise quickly in Europe, NHS England and NHS Improvement (NHSE&I) asked NICE to develop rapid guidelines and evidence summaries as part of the national response.1 Produced with input from NHSE&I, these resources would complement other agencies' outputs and, importantly, be relevant and accessible to healthcare staff at the frontline.2 For many in health and social care, NICE is synonymous with evidence-based guidance produced through an extensive, transparent and rigorous process. However, such guidelines typically cover a pathway of care from diagnosis and treatment to longer term management, as seen in guidelines for dementia, diabetes, and alcohol-use disorders.3-5 Answering between 15 and 20 review questions, the process takes up to 2 years to complete. In the early stages of a public health emergency, with limited evidence and time available, a different approach was needed. NICE rapidly adapted, developing interim methods and processes, along with a new style and scale of output. This enabled the production of a suite of COVID rapid resources in a matter of weeks.6 At the core of these resources is a programme of rapid guidelines, which focus on three to six review questions and summarize existing evidence and expert consensus. These answer difficult questions facing health and care workers in the pandemic, such as whether, for whom, and in which circumstances immunosuppressive treatment should continue, or how to manage end of life symptoms of COVID in the community.7-9 Over time, 21 COVID rapid guidelines have been developed, outlining how to care for people with COVID, how to support and modify care for people with specific at-risk conditions, and how to provide services during the pandemic.2
Developing rapid guidance in such a context walks a careful line between rigour and speed. The steps of full guideline production are detailed in Figure 1.10 Although these steps were maintained, for the resources to be available quickly the process needed to be tailored to the circumstances. Steps were run in parallel rather than sequentially, with some components accelerated, for example by live editing documents with clinical experts on Microsoft Teams, or adapted to the constrained circumstances, such as targeted consultation when broader public consultation and open recruitment to expert panels were not feasible.1,6 Furthermore, the rapid guidance is kept under review with a tailored approach to surveillance and updating.2,6 With limited evidence and time available, experience was key. The rapid guidelines draw on contributions from patient organizations, professional societies and royal colleges, who shared their expertise through targeted consultation and provided hundreds of comments in timescales sometimes as tight as 6 hours.1,2 Clinicians with first-hand frontline experience of the pandemic assisted in interpreting information and relating it to practice, such as intensive care specialists for the critical care and pneumonia rapid guidelines.11,12 Within NICE, the process was facilitated by experienced multidisciplinary staff, particularly those with previous clinical experience, and drew on a huge work effort familiar to others responding to the pandemic, with 130 people on rotation covering long hours of intensive work in the development programme.1
As the pandemic progresses, a rapidly increasing flow of information is generated by efforts to advance understanding of the diagnosis, treatment and longer term management of COVID. For example, over 2000 papers were considered during the production of the rapid guideline on arranging planned care in hospitals and diagnostic services, with 54 finally included, reflecting a rapidly growing evidence base of variable quality relating to COVID.13 NICE therefore welcomes collaborations with international initiatives facilitating the rapid development and sharing of high-quality systematic reviews and guidance, which focus efforts on global needs and where most value can be added, while reducing research waste and duplication of effort.1 These include the World Health Organization's Evidence Collaborative, Cochrane's work to progress COVID-19-related reviews, such as their contribution in prioritizing questions and refining methods for rapid development, and the COVID Evidence Network to support Decision-making (COVID-END).14-16 Furthermore, to assist the gathering of the best possible data and evidence in primary research, guides and evidence standards for COVID-19 diagnostic tests and medicines are available on the NICE website.2 The organization is also providing free scientific advice for researchers developing novel diagnostics or therapeutics for COVID-19.2
A key role of NICE is providing national guidance and advice about health and social care, and the organization and its staff are proud to develop resources to support colleagues across the sector during the pandemic. This work continues, with an ongoing programme of COVID resources drawing on recent approaches and more established NICE methodologies, and the production of an interim process and methods guide for guidelines developed in response to health and social care emergencies.2,6 Yet the experience also highlights the challenges of developing rapid, trustworthy information to guide practice. Learning will therefore involve identifying what can be applied in the future, from day-to-day practice to responding to other health and social care emergencies, including NICE's future contributions to a national, system-wide pandemic response.
Nipah virus (NiV) and Hendra virus (HeV) are related zoonotic paramyxoviruses belonging to the Henipavirus (HNV) genus. They cause severe encephalitis and respiratory illness, with fatality rates of 50-100%, and are classified as biosafety level 4 (BSL-4) select agents1. Unlike several other paramyxoviruses, HNVs have a broad species tropism (as is the case for canine distemper virus2) and can infect animals from at least six mammalian orders1. Pteropid fruit bats (flying foxes) appear to be the predominant natural reservoir hosts of HNVs3. Since the first outbreaks of HeV in Australia in 1994 and of NiV in Malaysia in 1998, HeV has repeatedly infected horses in Australia with resultant human exposures4, while food-borne NiV spillovers have occurred nearly every year in Bangladesh5. Furthermore, NiV outbreaks have occurred in the Philippines and in India6. Besides Asia and Oceania, the detection of anti-HNV antibodies in humans and Pteropus bats in Africa, a continent in which no documented NiV or HeV outbreaks have occurred, further suggests that future HNV zoonotic emergence is likely7. Although more than two billion people live in regions threatened by potential HNV outbreaks, there are no clinically approved vaccines or specific therapeutics against these pathogens.
Paramyxoviruses deliver their genome to the host cytoplasm by fusing their lipid envelope with the cellular membrane to initiate infection. This process requires the concerted action of two surface glycoproteins, the attachment (G/H/HN) and fusion (F) proteins, which sets the paramyxovirus entry machinery apart from all other class I fusion proteins. G is a type II homotetrameric transmembrane protein with an ectodomain comprising a stalk and a C-terminal β-propeller head, the latter domain being responsible for binding to ephrinB2 or ephrinB3 (ephrinB2/B3) receptors8-12. F is a homotrimeric type I transmembrane protein that is synthesized as an immature F0 precursor and cleaved by cathepsin L during endocytic recycling to yield the mature, disulfide-linked F1 and F2 subunits13-15. Viral fusion proteins are believed to exist in a kinetically trapped metastable conformation at the virus surface16. Upon binding to ephrinB2/B3, NiV G has been proposed to undergo conformational changes leading to F triggering and insertion of the F hydrophobic fusion peptide into the target membrane8,17,18. Subsequent refolding into the more stable postfusion F conformation drives merger of the viral and host membranes to form a pore for genome delivery to the cell cytoplasm, as shown for other paramyxoviruses and pneumoviruses13,19-24.
We previously isolated a hybridoma secreting a murine mAb that recognizes prefusion NiV F and HeV F glycoproteins, which we designated 5B3 17 . We report here the cloning, sequencing and humanization of 5B3 (h5B3.1) and demonstrate it bound with high affinity to prefusion NiV F and HeV F. Neutralization assays, carried out under BSL-4 containment, showed that 5B3 and h5B3.1 potently inhibited NiV and HeV infection of target cells. We determined a cryo-electron microscopy (cryo-EM) structure of the NiV F trimer in complex with 5B3 and found the antibody binds to a prefusionspecific quaternary epitope that is conserved in NiV F and HeV F. Our structural data, combined with F-triggering and membrane fusion assays, demonstrate that 5B3 locks F in the prefusion conformation and prevents membrane fusion via molecular stapling, providing a molecular rationale for its potency. These results define a critical neutralization epitope on the surface of the NiV and HeV F glycoproteins and pave the way for the future use of h5B3.1 for prophylaxis or as therapeutic for NiV-and HeV-infected individuals.
5B3 and h5B3.1 antibodies potently neutralize NiV and HeV. To understand the humoral immune response directed at the HNV F glycoprotein, we cloned and sequenced the 5B3 neutralizing mAb from hybridomas previously obtained upon immunization of mice with a prefusion NiV F ectodomain trimer17. The resulting antibody was subsequently humanized (and designated h5B3.1) to enable future therapeutic use in humans. We used biolayer interferometry to characterize the binding kinetics and affinity of the 5B3 and h5B3.1 Fab fragments for prefusion NiV F and HeV F ectodomain trimers immobilized on the surface of biosensors. The 5B3 Fab bound to HeV F/NiV F with equilibrium dissociation constants of 4.6-10 nM, compared to 31-61 nM for the h5B3.1 Fab (Fig. 1a-d and Supplementary Table 1). Analysis of the determined association and dissociation rate constants indicates that the weaker binding affinity of h5B3.1, compared to 5B3, largely resulted from an enhanced dissociation rate of h5B3.1 (Supplementary Table 1).
Subsequent neutralization assays were carried out using authentic NiV-Malaysia (NiV-M), NiV-Bangladesh (NiV-B) and HeV virions under BSL-4 containment. Plaque reduction assays were performed to analyze the neutralization of viruses pre-incubated with varying amounts of the 5B3 or h5B3.1 antibodies. We determined mean half-maximal inhibitory concentrations of 1.2 µg ml−1 (5B3) and 0.9 µg ml−1 (h5B3.1) for NiV-M, 1.3 µg ml−1 (5B3) and 0.6 µg ml−1 (h5B3.1) for NiV-B, and 1.4 µg ml−1 (5B3) and 1.3 µg ml−1 (h5B3.1) for HeV (Fig. 1e–g). These results show that both 5B3 and h5B3.1 potently inhibited infectious NiV and HeV, the two HNVs responsible for recurrent outbreaks of lethal encephalitis and respiratory disease in humans.
Cryo-electron microscopy structure of 5B3 in complex with the NiV F glycoprotein. To elucidate the mechanism of 5B3-mediated neutralization of NiV and HeV, we determined a cryo-EM structure of a stabilized NiV F ectodomain trimer in complex with the 5B3 antibody Fab fragment at 3.5 Å resolution (Fig. 2a,b, Table 1 and Extended Data 1). To assist model building, we also crystallized the isolated 5B3 Fab fragment and determined its structure at 1.5 Å resolution using X-ray crystallography (Table 2). In agreement with the features observed in the cryo-EM map, the local resolution is highest for most of NiV F and the 5B3 variable domains, including the interface between NiV F and 5B3, whereas the Fab constant domains are poorly resolved, owing to elbow flexibility between the constant and variable domains, and were not modeled. The final model includes NiV F residues 27–480 with a chain break between residues 105 and 112. 5B3 binds to a quaternary epitope on domain III of the NiV F globular head, with a stoichiometry of three Fabs bound to one F trimer (Fig. 2a,b, Table 1 and Extended Data 1 and 2).
The cryo-EM map resolves the four N-linked oligosaccharides present on each NiV F protomer (at positions Asn67, Asn99, Asn414 and Asn464) and reveals that 5B3 recognizes a glycan-free epitope on the F surface. We could not detect an oligosaccharide at position Asn64 in the cryo-EM reconstruction (Fig. 2a,b), in agreement with previous biochemical studies [36–38]. All six complementarity-determining regions (CDRs), along with the light chain framework region 2, contribute to the paratope and bury 980 Å2 at the interface with the NiV F epitope, which mostly resides within one protomer (Fig. 3a,b). CDRL1 contacts the NiV F heptad-repeat A (HRA) β-hairpin via electrostatic interactions between 5B3 Gln27 and F residues Lys160, Gln162 and Thr168 (Fig. 3a–c and Extended Data 2). CDRL1, CDRL3 and CDRH3 bind to the core β-sheet in domain III via contacts with both the F2 (residues 53–55) and F1 (residues 282–285) subunits (Fig. 3a–c). CDRH2 protrudes at the interface between two NiV F protomers and interacts with a segment C-terminal to the central helix and with the upstream helix of a neighboring protomer (Fig. 3a–c and Extended Data 2). Comparison with the unliganded NiV F structure reveals that 5B3 binding induces a local reorganization (or stabilizes a different conformation) of the HRA β-hairpin (residues 160–170) and of residues 248–252 (Extended Data 3).
5B3 relies on an atypical binding mode to NiV F, with nearly equal contributions of the heavy (48%) and light (52%) chains to the antibody buried surface area. This partly reflects the fact that CDRL1 makes a greater contribution to the paratope than the nine-residue-long CDRH3 (268 Å2 versus 190 Å2 of buried surface area, respectively), in contrast with canonical CDRH3-dominated antibody/antigen interfaces. To confirm these findings, we probed the NiV F binding ability of single-chain (scFv) chimeric constructs in which either the variable heavy (VH) or variable light (VL) h5B3 chain was replaced with an unrelated chain from a human scFv library. Although the h5B3 scFv immunoprecipitated full-length NiV F, none of the scFv chimeras did, in agreement with the equivalent contributions of the heavy and light chains to binding observed in our structure (Fig. 3a,b and Extended Data 4a). The structural data were further validated using site-directed mutagenesis of selected residues participating in the NiV F epitope, followed by 5B3-mediated immunoprecipitation to assess residual binding (Extended Data 4b). We also used the prefusion F-specific 12B2 antibody as well as a cell–cell fusion assay to probe the conformational integrity of the F mutants analyzed (Extended Data 4b,c). The NiV F K55A substitution inhibited 5B3 recognition, which is probably explained by the loss of interactions with the CDRL1 Trp32 and CDRL3 Tyr92 residues of 5B3 visualized in our structure. Furthermore, the abrogation of 5B3 binding to NiV F L53D or L53S probably resulted from the loss of favorable interactions with CDRL1 Trp32, CDRL3 Phe91 and CDRH3 Tyr102. Given that the tested mutants bound to the 12B2 antibody and retained 40–100% of wild-type F fusion activity (Extended Data 4b,c), we conclude that the observed loss of binding largely resulted from specific disruption of interactions with 5B3, without major effects on the overall F structure.
Analysis of the structure rationalizes the observed cross-neutralization of NiV and HeV because 35 of the 39 NiV F residues buried upon 5B3 binding are strictly conserved. The variable positions, Thr81, Asn84, Thr88 and Arg336 in NiV F, are conservatively or semi-conservatively substituted to Ser81, Thr84, Ser88 and Lys336 in HeV F, respectively (Fig. 3d).
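As an illustrative aside (not part of the original analysis), the conservative/semi-conservative labels for these four positions can be reproduced with a substitution-matrix heuristic. The score thresholds below are our assumption, not the authors' published criterion, and the snippet uses Biopython's BLOSUM62 matrix:

```python
# Illustrative sketch: classify the NiV F -> HeV F substitutions in the 5B3
# footprint by BLOSUM62 score. The thresholds are an assumption, not the
# criterion used in the paper.
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

# Variable footprint positions reported in the text (NiV residue, HeV residue)
variable_positions = {81: ("T", "S"), 84: ("N", "T"), 88: ("T", "S"), 336: ("R", "K")}

def classify(a, b):
    score = blosum62[a, b]
    if score >= 1:
        return "conservative"
    if score >= 0:
        return "semi-conservative"
    return "non-conservative"

for pos, (niv, hev) in variable_positions.items():
    print(f"position {pos}: {niv} -> {hev} ({classify(niv, hev)})")
```

Running this reproduces the classification given above: Thr to Ser and Arg to Lys score positively (conservative), whereas Asn to Thr scores zero (semi-conservative).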
Isolation of a NiV neutralization-escape mutant. As for other RNA viruses, the high mutation rate of HNVs could yield variants able to overcome 5B3 inhibition. For example, we previously showed that passaging NiV or HeV with anti-HNV G antibodies led to the isolation of viral mutants escaping neutralization by the respective antibody [28,39]. To assess whether 5B3 neutralization-escape virus mutants, analogous to the binding-deficient F mutants identified by site-directed mutagenesis, could arise, we passaged authentic NiV for three rounds in the presence of 5B3 under BSL-4 containment. Plaque-purified resistant viruses were isolated, and viral RNA from five NiV isolates was reverse-transcribed into cDNA for sequencing of the F gene. All five NiV escape mutants harbored the same F K55E substitution. This finding supports our mutagenesis data, because the NiV F K55A mutant was completely defective in 5B3 binding. We recombinantly produced the K55E F mutant and observed that it could not bind 5B3 while maintaining its ability to interact with 12B2 and to promote wild-type cell–cell fusion activity (Extended Data 4b,c). These experiments are in full agreement with our structural and biochemical data and show that NiV can escape 5B3 neutralization without losing F-mediated fusion, although the impact of the identified substitution on viral growth is not known.
The 5B3 antibody inhibits F-mediated fusion. Our structural data suggest that 5B3 prevents the fusogenic conformational changes leading to membrane fusion by locking NiV F in the prefusion state. The antibody recognizes a discontinuous epitope, spanning two neighboring protomers, that is present only in prefusion F, based on the conformational rearrangements F undergoes during fusion (Figs. 3a–c and 4a,b). Furthermore, CDRL1 interactions with the HRA β-hairpin hinder refolding of this motif, which would otherwise contribute to formation of the elongated central helix observed in the postfusion F state. This antibody-mediated molecular stapling strategy, involving simultaneous interactions with protein segments that are close to each other in prefusion F but far apart in postfusion F (Fig. 4a,b), is conceptually equivalent to the disulfide stapling approach implemented to stabilize the prefusion conformation of the measles virus F [40], respiratory syncytial virus (RSV) F [41], HeV F [42] and parainfluenza virus (PIV) F [43] glycoproteins. Finally, we predict that unfavorable steric clashes would occur with a 5B3 antibody bound to a neighboring protomer upon F refolding.
To validate the hypothesis that 5B3 locks NiV F in the prefusion conformation, we used an in vitro F-triggering assay entailing (1) cleavage of the wild-type NiV F0 ectodomain trimer with trypsin, under limited proteolysis conditions, to recapitulate the in vivo cathepsin L-mediated production of F1 and F2 [14,15], and (2) incubation at 50 °C to promote refolding of the trypsin-cleaved prefusion F trimer to the postfusion conformation [17,19]. We previously showed that peptides derived from the heptad-repeat B (HRB) sequence of NiV F or HeV F prevent completion of F refolding and are potent inhibitors of fusion and live virus infection [44,45]. Furthermore, when the triggering assay was carried out in the presence of a biotinylated HRB peptide, a conformational intermediate of the fusion reaction was captured and could be used as a reporter of F activation. Using this approach, we demonstrate here that addition of 5B3 or h5B3.1 during the triggering assay blocked fusogenic conformational changes in a concentration-dependent manner, whereas a non-neutralizing control antibody (13G5), specific for postfusion F, did not (Fig. 4c,d). Subsequent antibody affinity purification (protein G) of the material not captured by the HRB peptide showed that 5B3 (or h5B3.1) remained bound to F (Fig. 4d), supporting the hypothesis that 5B3/h5B3.1 trap NiV F in the prefusion conformation. Finally, capture of an F intermediate could be partially rescued by raising the temperature to ≥60 °C, indicating that binding of 5B3/h5B3.1 stabilizes NiV F by raising the energy barrier for transitioning to the postfusion state (Fig. 4e). In summary, the structural and biochemical data presented here show that 5B3/h5B3.1 inhibit fusogenic conformational changes by locking F in the prefusion state and raising the free energy of activation for fusion triggering.
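The temperature dependence of this rescue is consistent with simple transition-state reasoning. As an illustration (the paper does not derive this explicitly), treating F triggering as a thermally activated process with rate constant k gives:

```latex
% Thermally activated F triggering (illustrative; not derived in the paper).
% Antibody binding raises \Delta G^{\ddagger}, lowering k at a given T;
% increasing T partially restores a measurable triggering rate.
k = A \, \exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right)
```

In this picture, 5B3/h5B3.1 binding increases the activation free energy for the prefusion-to-postfusion transition, so that triggering becomes negligible at 50 °C but partially recovers when the available thermal energy is increased at ≥60 °C.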
To further study the mechanism of action of 5B3/h5B3.1 in the context of a full-length, membrane-embedded F glycoprotein, we carried out cell–cell fusion assays in the presence of varying concentrations of mAbs. We observed that 5B3 and h5B3.1 prevented NiV F- and HeV F-mediated membrane fusion in a concentration-dependent manner, consistent with trapping of F in the prefusion conformation resulting in inhibition of membrane fusion (Figs. 4f,g and 5).
The paramyxovirus and pneumovirus F glycoproteins are key players in viral entry, promoting fusion of the viral and host membranes through large-scale structural rearrangements [13,20–24]. The F conformation presented to the immune system can be a major determinant of the antibody response elicited by these glycoproteins. Previous work showed that most of the RSV-neutralizing activity in human serum is conferred by antibodies specifically recognizing prefusion F [46]. Structure-based physical stabilization of the RSV F prefusion state, via mutations and fusion to another protein (domain) as well as multivalent display on a computationally designed nanoparticle platform, correlated with increased elicitation of neutralizing mAbs [22,41,47,48]. Prefusion-stabilized F also induced greater neutralizing humoral immune responses than postfusion F against parainfluenza viruses in mice and rhesus macaques [43]. However, antibodies present in the sera of mice immunized with human metapneumovirus prefusion or postfusion F ectodomain trimers bound similarly to either protein conformation and equally neutralized virus infectivity, demonstrating that prefusion and postfusion F share most neutralizing epitopes for this virus [49]. We previously established that fusing a GCN4 trimerization motif to the C-terminal end of the NiV F and HeV F ectodomains yielded prefusion-stabilized trimers that could elicit a neutralizing antibody response in mice [17]. No HNV F mAb, however, had been characterized at the molecular level.
Here, we have sequenced and humanized the 5B3 neutralizing mAb and demonstrated its ability to cross-neutralize authentic NiV and HeV. We show that 5B3 and h5B3.1 inhibited membrane fusion by locking F in the prefusion conformation upon binding to a conformational (quaternary) epitope, which is reorganized during the fusion reaction. This mechanism of action rationalizes the potent 5B3/h5B3.1-mediated neutralization of NiV and HeV entry into target cells and is reminiscent of D25 inhibition of RSV via binding to and stabilization of prefusion F [22]. These findings are also in line with the enhanced properties of RSV and parainfluenza virus prefusion-stabilized F glycoproteins as candidate vaccine immunogens compared with the corresponding postfusion F [41,43,48]. Accordingly, the previously developed disulfide-stabilized prefusion HeV F [42], and the corresponding prefusion NiV F construct engineered here, hold the promise of eliciting stronger neutralizing antibody titers than GCN4-only stabilized F glycoprotein ectodomains, by preventing refolding to the postfusion conformation.
Fig. 3 | The 5B3 neutralizing antibody recognizes a conserved quaternary epitope on the NiV F glycoprotein. a, Ribbon diagram of the NiV F trimer in complex with the 5B3 Fab fragment. One F protomer is rendered in teal and the other two protomers in grey. Only one Fab fragment is shown for clarity. b, Molecular surface representation of the NiV F trimer with the 5B3 CDR loops shown as ribbons, highlighting the quaternary nature of the epitope. c, Enlarged view of the interface between NiV F and 5B3 with selected residues rendered as sticks. NiV F residues are colored teal with oxygen and nitrogen atoms colored red and blue, respectively. In a–c, the 5B3 variable heavy (VH) and light (VL) chains are colored purple and pink, respectively. d, Molecular surface representation of the NiV F trimer showing the 5B3 footprint colored by residue conservation among NiV F and HeV F glycoproteins. Conservative sub.: conservative substitution; semi-conserv. sub.: semi-conservative substitution.
Fig. 4 | […] showing the 5B3 footprint colored violet. Upon refolding, the 5B3 epitope is reorganized. The latter model was obtained by threading the NiV F sequence onto the human parainfluenza virus 3 postfusion F structure [20] (PDB 1ZTM). c, 5B3 concentration-dependent inhibition of streptavidin-mediated pulldown of a biotinylated HRB/NiV F conformational intermediate complex in a triggering assay. The non-neutralizing control antibody (13G5), which is specific for postfusion F, had no effect. d, NiV F triggering assay carried out in the presence of 5B3 or h5B3.1 IgGs, showing that both mAbs prevented F fusogenic conformational changes. Subsequent protein G immunoprecipitation of F/5B3 and F/h5B3.1 complexes that were not pulled down by the biotinylated HRB peptide indicated that the antibodies remain bound to F. e, Capture of an F fusogenic conformational intermediate could be partially rescued by raising the temperature to ≥60 °C, as detected by comparing streptavidin-mediated pulldown and protein G immunoprecipitation of F/5B3 and F/h5B3.1 complexes. In a and b, a single 5B3 epitope is colored for clarity. In all panels, precipitated samples were analyzed by western blotting using a rabbit anti-F antibody. f,g, NiV F (f) or HeV F (g) mediated cell–cell fusion could be inhibited by 5B3 or h5B3.1 in a concentration-dependent manner. D54 is an HIV envelope antibody used as a negative control. Data shown are mean and s.d. for n = 2 technical replicates. Uncropped images for d and e are available in the source data online.
So far, m102.4 is the only human mAb that has been used for HNV protection studies in ferrets and African green monkeys [29–31]. Murine antibodies have limited clinical use owing to their short serum half-life, inability to trigger human effector functions and the risk of mounting an anti-mouse antibody response. We successfully engineered a humanized version of 5B3 (termed h5B3.1), which retained breadth and potency comparable to the parental mouse mAb and inhibited F-mediated membrane fusion. Therefore, similar to the anti-HNV G m102.4 neutralizing mAb, h5B3.1 could potentially be used for prophylaxis or for post-exposure therapy in individuals exposed to NiV or HeV. Between 2010 and 2017, m102.4 was used on a compassionate basis to treat individuals with significant HeV or NiV exposure risk in Australia, the USA and India (https://www.who.int/blueprint/priority-diseases/key-action/nipah/en/). These individuals showed no evidence of infection or known health complications after administration of the mAb. The fact that m102.4 was used in humans despite the lack of clinical trials or approval by the FDA (or equivalent agencies) emphasizes the urgent need for developing therapeutics and other countermeasures against highly pathogenic HNVs, which have fatality rates of 50–100%.
Escape mutants have been isolated upon HNV passaging with m102.4 [39] or with 5B3 (here), but they have never been observed during m102.4 in vivo efficacy tests against NiV or HeV, putatively owing to the very high antibody doses used in those experiments in conjunction with the effective adaptive immune responses of the subjects. We postulate that similar outcomes could be expected with comparably high doses of 5B3/h5B3.1 mAbs. Furthermore, neutralization-escape mutations to such an F-specific mAb could have a negative impact on viral growth, replication and virulence, as observed with mutants obtained with anti-G antibodies [28]. Finally, the use of antibody cocktails has been proposed for Ebola virus [50–52] and severe acute respiratory syndrome coronavirus (SARS-CoV) [53], and developed as a candidate therapeutic for hepatitis C virus (XTL-6865, XTL Biopharmaceuticals), to prevent and/or limit the emergence of such mutants as well as to enhance neutralization breadth. We suggest that a similar strategy, combining h5B3.1 and m102.4 or other anti-HNV mAbs targeting multiple antigenic sites on G and F, could be implemented for treating future NiV and HeV infections.
Any methods, additional references, Nature Research reporting summaries, source data, statements of code and data availability and associated accession codes are available at https://doi.org/10.1038/s41594-019-0308-9.
Cell lines. FreeStyle 293F cells were grown in FreeStyle 293 expression medium (Life Technologies), cultured at 37 °C with 5% CO2 and 150 r.p.m. HEK293T/17 is a female human embryonic kidney cell line (ATCC). HEK293T/17 cells (kind gift from G. Quinnan) were cultured at 37 °C with 5% CO2 in flasks with DMEM + 10% FBS + penicillin–streptomycin + 10 mM HEPES. VeroE6 cells (ATCC) were grown in serum-free medium (VP-SFM, ThermoFisher) at 37 °C and 5% CO2. HeLa-USU and HeLa-ATCC (ATCC) cells [12] were maintained in DMEM (Quality Biologicals) supplemented with 10% Cosmic calf serum (HyClone) and 2 mM L-glutamine. HeLa-USU cells (ephrin-B2 and ephrin-B3 negative; kind gift from A. Maurelli, Uniformed Services University) and HeLa-CCL2 cells (ephrin-B2 positive; ATCC) have previously undergone cytogenetic analysis. Other cell lines were not authenticated. Cells were not tested for mycoplasma contamination.
Antibodies and peptides. The rabbit anti-F polyclonal antibody was produced by Spring Valley Laboratories using the NiV F ectodomain trimer fused to GCN4 [17] as the immunogen. The horseradish peroxidase-conjugated rabbit anti-S-peptide antibody was purchased from Bethyl Laboratories. Anti-F murine monoclonal antibodies were produced as previously described [17].
The N-terminally biotinylated NiV F HRB peptide (residues 453–488) [44] was synthesized by Global Peptide Services.
NiV F and HeV F constructs. The NiV F and HeV F ectodomain constructs used for biolayer interferometry and the NiV F triggering assay comprise the codon-optimized NiV F (isolate UMMC1; GenBank sequence accession no. AY029767) or HeV F (isolate Horse/Australia/Hendra/1994) ectodomain (residues 1–487) fused to a C-terminal GCN4 motif followed by a factor Xa sequence and an S-tag (KLKETAAAKFERQHMDS), cloned in a pcDNA Hygro (+)-CMV+ vector for transient expression using FreeStyle 293F cells. For epitope mapping, conversion of specific residues of NiV F to alanine, serine, glutamic acid or aspartic acid was performed via site-directed mutagenesis using the QuikChange II Site-Directed Mutagenesis Kit (Stratagene). The template for the reactions consisted of a C-terminal S-peptide-tagged version of the codon-optimized full-length NiV F (UMMC1 isolate) cloned in the pcDNA Hygro (+)-CMV+ vector. All mutation-containing constructs were sequence verified.
The NiV F ectodomain construct used for cryo-EM experiments comprises a human codon-optimized NiV F ectodomain trimer (amino acid residues 1–494) with a FLAG tag (DYKDDDDK) introduced between residues L104 and V105 and a C-terminal GCN4 motif (a kind gift from H. Aguilar-Carreno). This construct was engineered by subcloning into a pBS SK(+) vector and introducing the previously described N100C/A119C substitutions [42] by site-directed mutagenesis using a QuikChange kit (Agilent), before subsequent subcloning into a pCAGGS vector for transient expression in FreeStyle 293F cells.
Cloning and sequencing of mAb 5B3 cDNA. The 5B3 cDNA was amplified from hybridomas using a SuperScript III CellsDirect cDNA Synthesis Kit (Invitrogen) with random hexamer and IgG2-specific primers [54]. PCR amplification of the VH and VL was performed using the cDNA as a template with degenerate forward primers for the signal sequence or the conserved framework 1 (FR1) of the VH- and VL-encoding sequences, and reverse primers for the FR4 or the 3′ end of the constant heavy chain 1 (CH1)- and constant light chain (CL)-encoding sequences [54,55]. The PCR products were cloned into the pCR-Blunt II-TOPO vector (Invitrogen) and transformed into One Shot TOP10 chemically competent Escherichia coli (Invitrogen). Plasmids were extracted from colonies and the cloned PCR products were sequenced using M13 forward and reverse primers.
Humanization of 5B3 to generate h5B3 and h5B3.1. To engineer a humanized version of 5B3, a human scFv library was first generated based on FR1 and FR4 sequence similarity with 5B3. We adapted previously described methods and PCR primers employed for the generation of a naive human scFv library constructed from peripheral blood B cells of several healthy donors [56], using only the VH subfamily III and κ VL subfamily I primers. VH and VL were first amplified separately from the IgM cDNA library. For VH, we used forward and reverse primers probing the FR1 and FR4 of VH III, with an SfiI restriction site added to the 5′ end of the forward primer and a (G4S)3 linker sequence added to the 3′ end of the reverse primer. For VL, we used forward and reverse primers probing the FR1 and FR4 of VLκ I, with a (G4S)3 linker sequence added to the 5′ end of the forward primer and an SfiI restriction site added to the 3′ end of the reverse primer. The scFv library was assembled by overlapping PCR combining the VH and VL PCR products as template and using the VH III FR1 SfiI forward and VLκ I FR4 SfiI reverse primers. The amplified scFv was then cloned into a pCom3X vector harboring a C-terminal hexa-histidine tag. Colonies from the scFv library were grown and expressed as previously described [57]. We selected the 12 best-expressing clones for DNA sequencing based on Coomassie blue staining and western blot analysis using an anti-histidine tag antibody. The translated human scFv FR sequences were then aligned against that of 5B3. For humanization of 5B3, conserved human residues from the alignment were identified and substituted into the homologous positions of 5B3 to generate h5B3. To further humanize h5B3, a version named h5B3.1 was generated in which one residue in each of CDR1 and CDR2 and two residues in CDR3 were mutated to conserved human residues based on the sequences from the scFv library described above.
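To make the framework-grafting step concrete, the sketch below mirrors the procedure described above: align human framework sequences against 5B3 and copy human residues into the corresponding 5B3 positions where the library agrees. All sequences are hypothetical placeholders (not the actual 5B3 or library sequences), and the unanimity rule is a simplification of the published procedure:

```python
# Illustrative sketch of the framework-humanization step described above.
# Sequences are hypothetical placeholders; grafting only at positions where
# the human library is unanimous is a simplifying assumption.
from collections import Counter

mouse_fr1 = "EVQLQQSGPELVKPGAS"          # hypothetical 5B3 FR1 segment
human_fr1s = [                            # hypothetical aligned human FR1s
    "EVQLVESGGGLVQPGGS",
    "EVQLVESGGGLVQPGGS",
    "EVQLVESGGGLVKPGGS",
]

def humanize(mouse, humans):
    out = []
    for i, mouse_res in enumerate(mouse):
        column = [h[i] for h in humans]
        residue, count = Counter(column).most_common(1)[0]
        # Graft the human residue when the library agrees at this position
        out.append(residue if count == len(humans) else mouse_res)
    return "".join(out)

print(humanize(mouse_fr1, human_fr1s))   # humanized FR1 segment
```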
scFv and IgG1 constructs. The scFv constructs were designed with VH and VL separated by a flexible (G4S)3 linker, codon-optimized, synthesized by Genscript and cloned into a promoter-modified pcDNA Hygro (+)-CMV+ vector [58] with an immunoglobulin κ chain leader sequence and a C-terminal S-peptide tag followed by a hexa-histidine tag.
For IgG constructs, the VH and VL were cloned into a pDR12 vector that harbors the κ CL and IgG1 CH fragments as separate open reading frames with independent promoters [59]. Subsequently, the entire expression cassette of the heavy and light chains of pDR12-h5B3.1 was amplified and sub-cloned into the pcDNA Hygro (+)-CMV+ vector for the development of stable cell lines.
Generation of scFv- and IgG1-expressing stable cell lines. HEK 293T cells grown in D-10 were transfected with the different scFv or IgG1 constructs using Fugene transfection reagent (Roche Diagnostics). Cells were transfected with 2 µg DNA and 6 µl Fugene per well of a 60% confluent six-well tissue culture plate following the manufacturer's instructions. At 48 h post transfection, the culture medium was either replaced with selection medium (D-10 supplemented with 150 µg ml−1 hygromycin B, Invitrogen) for stable cell line development, or the cells were harvested for S-protein agarose (EMD Biosciences) or Ni-NTA agarose (QIAGEN) precipitation to evaluate transient expression. To generate a cell line for stable expression, hygromycin-resistant cells were subjected to two rounds of limiting dilution cloning, as previously described [58].
Large-scale expression and purification of IgG1. Production and purification of mouse IgGs (5B3, 12B2 and 13G5) from hybridomas was carried out as previously described [17].
Transient expression of h5B3.1 IgG1 was carried out by transfecting FreeStyle 293F suspension cells in serum-free FreeStyle 293 expression medium (Invitrogen) in shaker flasks at a density of 1 × 10^6 cells ml−1 using 293fectin transfection reagent (Invitrogen) following the manufacturer's protocol. Production of h5B3.1 IgG1 from a stable cell line was carried out by culturing the FreeStyle 293F cells expressing h5B3.1 IgG1 in 70 ml of FreeStyle 293 expression medium in 500 ml shaker flasks at a density of 1 × 10^6 cells ml−1. The transfected or stable cells were allowed to grow for an additional 3–4 days, with 50 ml of culture medium added on each subsequent day.
Culture supernatants expressing IgG were collected and centrifuged at 4 °C for 15 min at 5,000g. The supernatant was then filtered through a 0.2 µm low-protein-binding membrane (Corning) and passed through a HiTrap Protein G HP column (GE Healthcare Biosciences) equilibrated in phosphate-buffered saline (Quality Biologicals). The column was washed with five column volumes of phosphate-buffered saline. The bound mAb was eluted with 0.1 M glycine pH 2, immediately neutralized with 1 M Tris pH 8.0, concentrated, and buffer-exchanged into phosphate-buffered saline using an Amicon Ultra centrifugal concentrator (Millipore).
Generation of Fab fragments from IgG. The 5B3 Fab was obtained by fragmentation of mouse 5B3 IgG using Pierce mouse IgG1 Fab and F(ab′)2 preparation kits according to the manufacturer's protocol.
The h5B3.1 Fab fragment was obtained by fragmentation of h5B3.1 IgG with Lys-C protease (EMD Millipore) and affinity purification using protein A agarose resin (Genscript). Briefly, 1.0 mg IgG was incubated with 0.5 µg Lys-C for 7 h at 37 °C. The reaction was quenched by addition of PMSF to 1 mM final concentration and the undigested and Fc-containing portion of the sample was removed using a protein A resin. The Fab-containing flow-through from the protein A affinity step was collected.
The Fab-containing fraction was concentrated and further purified using a Superdex 75 10/300 gel filtration column equilibrated in a buffer containing 50 mM Tris pH 8.0 and 150 mM NaCl.
NiV F and HeV F ectodomain production. Soluble NiV F and HeV F were produced by transient transfection of FreeStyle 293F cells at a density of 1 × 10^6 cells ml−1 with the corresponding plasmid using 293-Free transfection reagent (Millipore) and Opti-MEM (Thermo Fisher) according to the manufacturer's protocol. After five days in a humidified shaking incubator maintained at 37 °C and 8% CO2, the cell supernatant was harvested and clarified of cell debris by centrifugation. Subsequent affinity purification was carried out using an anti-FLAG resin (Genscript) with elution in 1 mg ml−1 FLAG peptide dissolved in Tris buffer pH 8.0, 150 mM NaCl, or with S-protein agarose (Millipore Sigma, Novagen) with elution in 0.2 M citric acid pH 2.0 followed by immediate neutralization with 1.0 M Tris pH 9.5. The eluted fraction was buffer-exchanged into 50 mM Tris buffer pH 8.0, 150 mM NaCl using a 30 kDa cutoff centrifugal concentrator (Millipore).
Biolayer interferometry. Assays were performed with an Octet Red 96 instrument (ForteBio) at 30 °C with shaking at 1,000 r.p.m. All measurements were corrected by subtracting the background signal obtained from biosensors without immobilized HeV F or NiV F. S-peptide-tagged HeV F or NiV F in phosphate-buffered saline pH 7.4 was diluted to 14 µg ml−1 in 10 mM acetate buffer pH 5.0 before immobilization for 300 s on N-hydroxysuccinimide/1-ethyl-3-(3-dimethylaminopropyl)carbodiimide hydrochloride (NHS-EDC)-activated Amine Reactive 2nd Generation (AR2G, ForteBio) biosensors. The sensors were then quenched in 1 M ethanolamine (ForteBio) for 300 s and incubated in kinetics buffer (KB: 1× PBS, 0.01% BSA, 0.02% Tween 20 and 0.005% NaN3 (ForteBio)) for 300 s to establish the baseline signal (nm shift). HeV F- or NiV F-loaded sensors were then immersed in solutions of purified Fab (5B3 or h5B3.1) diluted in KB to the desired concentrations for kinetics analysis (300–1.23 nM for the 5B3 Fab and 900–3.7 nM for the h5B3.1 Fab). Curve fitting was performed using a 1:1 binding model to determine the binding kinetics with the ForteBio data analysis software. Mean kon and koff values were determined with a global fit applied to all data. Experiments were performed twice with independent NiV F and HeV F protein preparations, yielding identical results and kinetic parameters.
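For readers unfamiliar with the 1:1 model referenced above, the fitted quantities relate as KD = koff/kon, and the observed association rate at analyte concentration C is kobs = kon·C + koff. The sketch below simulates and fits a single association phase; it is a generic illustration with synthetic parameters, not the ForteBio software's implementation or the reported 5B3/h5B3.1 values:

```python
# Minimal sketch of 1:1 Langmuir binding kinetics as analyzed in BLI.
# Synthetic parameters are illustrative, not the reported values.
import numpy as np
from scipy.optimize import curve_fit

def association(t, kon, koff, conc, rmax):
    """Sensor response during the association phase at concentration conc."""
    kobs = kon * conc + koff                      # observed rate constant
    req = rmax * conc / (conc + koff / kon)       # equilibrium response
    return req * (1.0 - np.exp(-kobs * t))

t = np.linspace(0, 300, 200)                      # seconds
conc = 50e-9                                      # 50 nM Fab

rng = np.random.default_rng(0)
signal = association(t, kon=2.0e5, koff=1.0e-3, conc=conc, rmax=1.0)
signal += rng.normal(0, 0.005, t.size)            # measurement noise

popt, _ = curve_fit(lambda t, kon, koff: association(t, kon, koff, conc, 1.0),
                    t, signal, p0=[1e5, 1e-2])
kon_fit, koff_fit = popt
print(f"KD = {koff_fit / kon_fit * 1e9:.1f} nM")  # ~5 nM for these inputs
```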
Crystallization, data collection and processing of the 5B3 Fab. Crystals were grown in hanging drops set up with a mosquito robot at 20 °C using 150 nl protein solution and 150 nl mother liquor containing 0.2 M magnesium chloride, 0.1 M Tris-HCl pH 8.5 and 20% PEG 8000. The diffraction dataset was collected at ALS beamline 5.0.1 and processed to 1.5 Å resolution using XDS [60] and Aimless [61]. The structure was solved by molecular replacement using Phaser [62] with the S230 SARS-CoV Fab [63] as the search model. The coordinates were subsequently improved and completed using Buccaneer [64] and COOT [65] and refined with BUSTER-TNT [66] and REFMAC5 [67]. The quality of the final model was analyzed using MolProbity [68] (score 1.13; clashscore 3.35). The percentage of poor rotamers was 0.92%; Ramachandran statistics were 98.84% favored and 100% allowed. Other crystallographic data collection and refinement statistics are summarized in Table 2.
Purification of the NiV F-5B3 complex. Purified FLAG-tagged NiV F N100C/A119C ectodomain was combined with an excess molar ratio of 5B3 Fab and incubated on ice for 1 h before injection onto a Superose 6 Increase 10/300 column (GE Healthcare) equilibrated in a buffer containing 50 mM Tris pH 8.0 and 150 mM NaCl. The fractions containing the complex were quality-controlled by negative-stain EM, pooled, buffer-exchanged and concentrated.
Cryo-electron microscopy specimen preparation and data collection. A 3 µl volume of the purified FLAG-tagged NiV F N100C/A119C-5B3 Fab complex at a concentration of 0.1 mg ml−1 was applied onto glow-discharged C-flat (Cu 200 mesh, CF-1.2/1.3, Protochips) holey carbon grids covered with a thin layer of continuous home-made carbon and incubated on the grids for 30 s. Grids were then plunge-frozen in liquid ethane cooled by liquid nitrogen, using an FEI Vitrobot Mark IV with a 3.0 s blot time. The chamber was kept at 20 °C and 100% humidity during the blotting process.
Data acquisition was carried out with the Leginon data collection software [69] on an FEI Titan Krios electron microscope operated at 300 kV and equipped with a Gatan BioQuantum energy filter (slit width of 20 eV) and a Gatan K2 Summit camera. The nominal magnification was 105,000× and the pixel size was 1.37 Å. The dose rate was adjusted to 8 counts per pixel per second, and each video was acquired in counting mode, fractionated into 50 frames of 200 ms each. A total of 2,686 micrographs were collected with a defocus range of 1.5–2.5 µm.
Cryo-electron microscopy data processing. Video frame alignment was carried out with MotionCor2 [70]. Particles were automatically selected using DoG Picker [71] within the Appion interface [72]. Initial defocus parameters were estimated with GCTF [73]. A total of 380,459 particles were picked, extracted with a box size of 256 pixels and preprocessed using Relion 3.0 [74]. Reference-free two-dimensional (2D) classification with cryoSPARC was used to select a subset of particles, which were used to generate an initial model using the Ab-Initio reconstruction function in cryoSPARC [75]. This 3D map was subsequently used as a reference for running 3D classification with C3 symmetry in Relion on the entire dataset. 262,879 particles were selected from the set of all picked particles for 3D refinement using Relion. CTF refinement in Relion 3.0 was used to refine per-particle defocus values, and particle images were subjected to the Bayesian polishing procedure in Relion 3.0 [76] and 3D refinement, before performing another round of CTF refinement and 3D refinement. The particles were subsequently subjected to another round of 3D classification in Relion 3.0 without refining angles and shifts. 38,756 particles from the best class (showing a resolved stem) were used for non-uniform refinement in cryoSPARC to obtain the final 3D reconstruction at 3.5 Å resolution. Reported resolutions are based on the gold-standard FSC = 0.143 criterion [77,78], and Fourier shell correlation curves were corrected for the effects of soft masking by high-resolution noise substitution [79]. Local resolution estimation and filtering were carried out using cryoSPARC. Data collection and processing parameters are listed in Table 1.
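The gold-standard FSC criterion used here has a simple definition: the normalized cross-correlation of the two independent half-map reconstructions within shells of spatial frequency, with resolution reported where the curve first drops below 0.143. The numpy sketch below is our own illustration of that definition, not the Relion or cryoSPARC implementation:

```python
# Fourier shell correlation between two half-maps (illustrative only).
# Assumes a cubic box of shape (n, n, n).
import numpy as np

def fsc(half1, half2, voxel_size):
    """Return shell center frequencies (1/A) and FSC values per shell."""
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    n = half1.shape[0]
    freqs = np.fft.fftfreq(n, d=voxel_size)
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    radius = np.sqrt(kx**2 + ky**2 + kz**2)
    shell_edges = np.linspace(0.0, freqs.max(), n // 2)
    centers, values = [], []
    for lo, hi in zip(shell_edges[:-1], shell_edges[1:]):
        mask = (radius >= lo) & (radius < hi)
        num = np.sum(f1[mask] * np.conj(f2[mask]))
        den = np.sqrt(np.sum(np.abs(f1[mask])**2) * np.sum(np.abs(f2[mask])**2))
        centers.append(0.5 * (lo + hi))
        values.append((num / den).real if den > 0 else 0.0)
    return np.array(centers), np.array(values)

# Resolution = 1 / (first shell frequency where FSC falls below 0.143)
```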
Model building and analysis. UCSF Chimera [80] was used to rigid-body fit the crystal structures of the NiV F ectodomain [13] and of the 5B3 Fab into the cryo-EM density. The model was subsequently rebuilt manually using Coot [81] and refined using Rosetta [82–84]. Glycan refinement relied on a dedicated Rosetta protocol, which uses physically realistic geometries based on prior knowledge of saccharide chemical properties [85], and was aided by using both sharpened and unsharpened maps. Models were analyzed using MolProbity [68].
Immunoprecipitation. To evaluate the binding of NiV F mutants to different antibodies, sub-confluent HEK 293T cells were transfected with untagged full-length wild-type or mutant NiV F constructs using the Fugene transfection reagent, as described above. Cells were harvested at 48 h post transfection, lysed in 500 µl buffer containing 0.1 M Tris pH 8.0 and 0.1 M NaCl supplemented with complete protease inhibitor cocktail (Roche), and clarified by centrifugation. Clarified lysates were added to 2 µg of IgG followed by 50 µl of 20% slurry protein G sepharose for samples incubated with IgGs, or 30 µl of 50% slurry S-protein agarose for those that were not.
To evaluate h5B3 chain binding to F, 300 µl of clarified untagged full-length F-expressing HEK 293T cell lysate was added to the h5B3 scFv-expressing culture supernatants and precipitated with 30 µl of 50% slurry of S-protein agarose.
In all cases, immunoprecipitation/pulldown was performed overnight at 4 °C. The samples were washed three times with a buffer containing 1% Triton X-100, 0.1 M Tris pH 8.0 and 0.1 M NaCl, and subsequently boiled in reducing SDS-polyacrylamide gel electrophoresis (PAGE) sample buffer, followed by SDS-PAGE and western blot analyses.
HRB peptide triggering assay. The capture assay was performed as previously described [17], with the addition of a competition step in the presence of increasing amounts of IgGs. Briefly, 1 µg of purified S-peptide-tagged NiV F ectodomain trimer was cleaved with 10 ng of trypsin (New England Biolabs) in a 10 μl reaction volume at 4 °C overnight to generate the mature F1 and F2 subunits. The reaction was stopped with 1 μl of 10× complete protease inhibitor cocktail (Roche). Subsequently, 2 µg of biotinylated NiV F HRB peptide was added in the presence or absence of competing IgG. The sample was heated for 15 min at 50 °C, 60 °C, 65 °C or 70 °C, and the NiV F/HRB complex was pulled down using 30 µl of 50% avidin-agarose slurry for 1 h at 4 °C (Thermo Fisher Scientific). When indicated, the unbound fraction was pulled down with protein G sepharose. Samples were washed three times with a buffer containing 1% Triton X-100, 0.1 M Tris pH 8.0 and 0.1 M NaCl, and boiled in 50 µl of reducing SDS-PAGE sample buffer. To analyze the precipitated products, a 25 µl sample was run on a 4–12% BT SDS-PAGE gel (Invitrogen), followed by western blotting and detection using the rabbit anti-NiV F polyclonal antibody.
Cell-cell fusion assays. Fusion between NiV F and G glycoprotein-expressing effector cells and permissive target cells was measured using a previously described β-galactosidase assay [89]. Briefly, plasmids encoding S-peptide-tagged wild-type NiV F or each F mutant and NiV G, or no DNA (control/mock transfection), were transfected into HeLa-USU effector cells using Lipofectamine LTX with Plus reagent (Thermo Fisher Scientific). The following day, transfected cells were infected with vaccinia virus encoding T7 RNA polymerase. HeLa-ATCC cells served as receptor-positive target cells and were infected with the E. coli lacZ-encoding reporter vaccinia virus. Cells were infected at a multiplicity of infection of 10 and incubated at 37 °C overnight. Cell fusion reactions were conducted by incubating the target and effector cell mixtures at a ratio of 1:1 (2 × 10^5 total cells per well; 0.2 ml total volume) in 96-well plates at 37 °C. Cytosine arabinoside (40 µg ml−1, Sigma-Aldrich) was added to the fusion reaction mixture to reduce non-specific β-galactosidase production. Nonidet P-40 (EMD Millipore Sigma) was added (0.5% final concentration) at 2.5 or 3.0 h, and aliquots of the lysates were assayed for β-galactosidase at ambient temperature with the substrate chlorophenol red-β-D-galactopyranoside (Roche). Assays were performed in triplicate, and fusion results were calculated and expressed as rates of β-Gal activity (change in optical density at 570 nm min−1 × 1,000) in a VersaMAX microplate reader (Molecular Devices). Equal amounts of leftover F/G-expressing effector cells from each fusion reaction were lysed and clarified by centrifugation. The lysates were then subjected to S-protein agarose precipitation followed by SDS-PAGE and western blotting to evaluate the expression level of each F mutant relative to wild type. The cell fusion activity of each mutant was converted to a percentage of wild-type fusion activity and normalized to the total expression of each F mutant, as measured by densitometry of the western blot bands using ImageQuantTL software (GE Healthcare Biosciences). The normalized percentage of wild-type fusion for each F mutant was calculated with the formula: normalized percentage of wild-type fusion = (100/percentage of wild-type expression) × percentage of wild-type fusion.
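The expression normalization at the end of this paragraph follows directly from the stated formula; the short worked example below uses placeholder numbers rather than measured values:

```python
# Expression-normalized fusion activity, implementing the formula stated in
# the text. Input percentages are placeholders, not measured values.
def normalized_fusion(pct_wt_fusion, pct_wt_expression):
    return (100.0 / pct_wt_expression) * pct_wt_fusion

# A mutant fusing at 40% of wild type but expressed at only 80% of wild type
# is credited with 50% of wild-type fusion after correcting for expression:
print(normalized_fusion(40.0, 80.0))  # -> 50.0
```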
NiV and HeV F mAb neutralization assays. The virus neutralization potencies of a control antibody (D10, IgG2a anti-HIV gp41 [90]), 5B3 IgG anti-F and h5B3.1 IgG1 anti-F were determined for NiV and HeV using a plaque reduction assay. Briefly, antibodies were serially diluted fivefold from 150 μg ml−1 to 1.9 ng ml−1 and incubated with a target of ~100 p.f.u. (plaque-forming units) of NiV-M, NiV-B or HeV for 45 min at 37 °C. Virus and antibody mixtures were then added to individual wells of six-well plates of VeroE6 cells. Plates were stained with neutral red two days after infection and plaques were counted 24 h after staining. Neutralization potency was calculated relative to the p.f.u. counted for each virus in wells without antibody. The experiments were performed in triplicate with independent virus preparations and duplicate readings for each replicate. Mean half-maximal inhibitory concentrations were calculated as previously described [91].
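The half-maximal inhibitory concentrations were computed as in ref. 91; since that procedure is not reproduced here, the sketch below shows one common, assumed approach, a four-parameter logistic fit to plaque-reduction data, with synthetic numbers:

```python
# Generic four-parameter logistic fit for an IC50 from plaque-reduction data.
# Synthetic data; an assumed approach, not necessarily that of ref. 91.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.0019, 0.0096, 0.048, 0.24, 1.2, 6.0, 30.0, 150.0])  # ug/ml
pct_plaques = np.array([99, 97, 92, 71, 48, 18, 5, 1], dtype=float)    # vs no-Ab

popt, _ = curve_fit(four_pl, conc, pct_plaques,
                    p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
print(f"IC50 ~ {popt[2]:.2f} ug/ml")
```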
Escape mutant analysis. Neutralization-resistant NiV mutants were generated by incubating 1 × 10^5 TCID50 (50% tissue culture infective dose) of virus with a sub-neutralizing amount of 5B3 IgG (40 μg) in 100 μl medium for 1 h at 37 °C, followed by inoculation onto 1 × 10^6 VeroE6 cells in the presence of IgG at the same concentration. The development of cytopathic effects was monitored over 72 h and progeny viruses were harvested. IgG treatment was repeated two additional times, with cytopathic effects developing more slowly with each passage. Viruses from the third passage were plaque purified in the presence of IgG and neutralization-resistant viruses were isolated. The experiment was performed in duplicate, and the F and G glycoprotein genes of five individual plaques were sequenced. The neutralization titers of the wild-type and neutralization-resistant viruses were determined using a micro-neutralization assay. Briefly, 5B3 IgG was serially diluted twofold and incubated with 100 TCID50 of wild-type or neutralization-resistant NiV for 1 h at 37 °C. Virus and antibodies were then added to a 96-well plate with 2 × 10^4 VeroE6 cells per well, in four wells per antibody dilution. Wells were checked for cytopathic effects three days post infection, and the mean half-maximal inhibitory concentration was determined as the mAb concentration at which at least 50% of wells showed no cytopathic effects.
Reporting Summary. Further information on experimental design is available in the Nature Research Reporting Summary linked to this article.
The sharpened and unsharpened cryo-EM maps and atomic model have been deposited in the EMDB and wwPDB with accession codes EMD-20584 and 6TYS, respectively. The 5B3 Fab crystal structure has been deposited in the wwPDB with accession code 6U1T. Uncropped images for Fig. 4d,e and Extended Data 4 are available online.
Extended Data Fig. 1 | Cryo-EM characterization of the NiV F glycoprotein in complex with the neutralizing antibody 5B3 Fab fragment. a, Representative micrograph. Scale bar, 100 nm. b, Reference-free 2D class averages. Scale bar, 100 Å. c, Gold-standard (black) and map/model (red) Fourier shell correlation curves. Dotted lines indicate the 0.143 and 0.5 thresholds. d, Two orthogonal views of the cryo-EM reconstruction colored by local resolution computed using cryoSPARC. e, Enlarged view of the model with the cryo-EM reconstruction rendered as a blue mesh.
Extended Data Fig. 3 | 5B3 binding is associated with a local structural reorganization of the HRA β-hairpin. a, Ribbon diagrams of the superimposed 5B3-bound and apo NiV F trimers. The 5B3 Fab fragments are omitted for clarity. The cyan square highlights the region of the structure shown in b–e. b,c, Enlarged views showing the HRA conformational change. d,e, Enlarged views rotated 45° relative to b and c. In all panels, 5B3-bound and apo-NiV F trimers are rendered grey and orange, respectively. In c–e, one 5B3 Fab fragment is shown with its heavy and light chains colored purple and pink, respectively; the cyan star indicates clashes that would occur between 5B3 and the HRA β-hairpin conformation observed in the apo-NiV F structure [13] (PDB 5EVM).
Recent decades have witnessed the development of a plethora of varied nano-drug delivery systems, not only by academic institutions but also by industrial organizations. This led to the availability of a huge database comprising several research papers and patents from all over the world describing these new dosage forms. Numerous funding agencies and industries actively promoted research into nanoparticulate drug delivery vehicles, and huge investments were made to this end. All these diverse and concurrent efforts created an awareness of the immense potential of these new drug delivery systems, which were then looked upon as the therapeutic regimens of the future.
There are many reasons behind the development and success of nanoparticulate drug delivery systems. A few years ago, the entire attention of the pharmaceutical industry was focused on novel developments in designing various dosage forms, primarily due to the expiry of existing patents, a surfeit of poorly soluble drug candidates and the problems of non-specificity with conventional dosage forms. Under these circumstances, the development of nanoparticulate drug delivery systems gained huge momentum due to a number of diverse factors listed in the following section of this chapter.
■ The pharmaceutical industries were poised to provide quality products to the patient while at the same time increasing or maintaining their profitability. However, this process demanded extensive scientific innovation and financial support. Development of new chemical entities and their transition from the laboratory to the market required a company to expend as much as 800 million US dollars [1]. Apart from requiring a huge investment, the development of a new drug was also an extensively time-consuming process with very limited success rates.
■ Research progress in the drug discovery area resulted in the development of various poorly soluble drug candidates. The solubility limitations of these drug candidates, in turn, led to poor bioavailability and lower therapeutic efficacy [2,3]. In such situations, formulation of these therapeutic molecules into nanoparticulate delivery systems was observed to improve their bioavailability and hence elicit the desired therapeutic effects. Nanoparticles also received prominence due to other probable benefits such as biodegradability, biocompatibility, high encapsulation capacity and the possibility of surface functionalization [1–3].
■ Nanoparticles were found to exhibit several advantages for parenteral drug delivery; in contrast to the aggregation phenomenon commonly observed with microparticles, the smaller size of nanoparticles endowed them with better distribution profiles during systemic circulation.
■ Owing to their small size, nanoparticles were found to effectively traverse many biological barriers. Of significant importance is their ability to permeate the blood-brain barrier (BBB). Although administration to the brain is an effective route for the treatment of various brain diseases, it is severely limited by the highly impermeable nature of the BBB. Because of their potential to cross this barrier, numerous publications have demonstrated the effectiveness of nanoparticles for targeting various central nervous system disorders [6]. Nanospectra Biosciences, Texas, USA, has recently initiated a clinical trial of nanoparticle-based 'nanoshells' for the treatment of brain tumors [7].
■ The size of nanoparticles offers distinct advantages when compared with conventional dosage forms. The tunable size of these systems greatly influences the release profile of the encapsulated active component, so a formulator can control drug release at the site of action [8,9].
■ Nanoparticles were found to be highly versatile systems to encapsulate and deliver not only chemical drug moieties but also nucleic acid therapeutics (DNA, siRNA) and imaging and diagnostic agents, for site-specific delivery and detection. Various ligands can be attached to the surface of nanoparticles to guide them to specific locations within the body [9,10].
Thus nanoparticulate drug carriers found applications in several diverse quarters of drug delivery research and, due to their tunable properties, they were foreseen as the future of the pharmaceutical and biotechnology industry.
Drug transport through a biological barrier largely depends upon the drug's solubility. Solubility exerts an important influence on drug permeation and absorption. The solubility of drugs has been a concern for formulation scientists because of the difficulties in developing oral and parenteral delivery systems for poorly soluble drugs. Though the pharmaceutical sector is witnessing vast advances in drug discovery and its therapeutic horizon is expanding, it is the existing drug molecules and novel drug candidates that pose major solubility problems. Various reports indicate that approximately 40% of the drugs currently on the market have poor water solubility [11]. About one-third of the drug candidates in the pharmacopoeia exhibit the same solubility limitations. Low water solubility in turn hampers adequate absorption and hence leads to low therapeutic efficacy [12]. The solubility constraint of drugs and novel drug candidates is thus one of the major obstacles to the development of therapeutically effective drug delivery systems.
Nanoparticles made from natural/synthetic polymers, lipids, proteins and phospholipids have received greater attention due to their higher stability and the opportunity for further surface modifications [13]. They can be tailored to achieve both controlled drug release and disease-specific localization, either by tuning the material characteristics or by altering the surface chemistry [14]. It has been established that nanocarriers can become preferentially concentrated in tumor masses, sites of inflammation and sites of infection by virtue of the enhanced permeability and retention (EPR) effect of the vasculature. Once accumulated at the target site, hydrophobic biodegradable polymeric nanoparticles can act as a local drug depot, depending upon the make-up of the carrier, thus providing a reservoir for the continuous supply of an encapsulated therapeutic compound at the disease site, such as a solid tumor. These systems, in general, can be used to provide targeted (cellular/tissue) delivery of drugs, to improve oral bioavailability, to sustain drug/gene effects in the target tissue, to solubilize drugs for intravascular delivery, or to improve the stability of therapeutic agents against enzymatic degradation (nucleases and proteases), the latter being especially relevant for protein, peptide and nucleotide-based agents [13,15,16]. Thus, the advantages of using nanoparticles for drug delivery result from two main basic properties: their small size and the use of biodegradable materials.
Many studies have demonstrated that nanoparticles of sub-micron size have a number of advantages over conventional dosage forms as drug delivery carriers [17]. A further advantage over conventional drug delivery systems is their better suitability for intravenous (i.v.) delivery. The smallest capillaries in the body are 5–6 μm in diameter. Therefore, particles distributed into the bloodstream must be significantly smaller than 5 μm, without forming aggregates, to ensure that they do not cause an embolism. Additionally, some types of cells permit the uptake of only sub-micron particles and not their larger counterparts. Generally, nanoparticles have relatively higher intracellular uptake than microparticles and are available to a much wider range of biological targets owing to their small size and relative mobility. Desai et al. found that 100 nm nanoparticles had 2.5-fold greater uptake than 1 μm microparticles, and 6-fold greater uptake than 10 μm microparticles, in a Caco-2 cell line [16]. Secondly, the use of biodegradable materials for nanoparticle preparation allows sustained drug release within the target site over a period of days or even weeks.
With regard to the formulation material, biodegradable nanoparticles formulated from polymers and lipids have been developed for intracellular sustained drug delivery, especially for drugs with an intracellular target [13,16]. Rapid escape of these nanoparticles from the endo-lysosomal compartment to the cytoplasmic compartment has been demonstrated [13,17]. Additionally, they were shown to effectively sustain intracellular drug levels, thus allowing a more efficient interaction with cytoplasmic receptors. Nanoparticles could therefore serve as effective delivery vehicles for drugs with cytoplasmic targets.
To summarize, nanoparticles have proven advantages over conventional dosage forms. They offer the pharmaceutical industry a reliable means to improve the therapeutic effects of existing drugs, to elicit better effects from new chemical entities and to deliver sensitive molecules such as proteins, peptides, DNA and RNA.
Nanoemulsions are optically isotropic and thermodynamically stable systems of two immiscible liquids, typically oil and water stabilized by surfactant(s), in which one liquid is dispersed as droplets in the other. Emulsions with nanoscopic droplet sizes (typically in the range of 20–200 nm) are often referred to as nanoemulsions [18]. Nanoemulsions offer enhanced solubilization capacity for poorly soluble drugs and increased drug loading, and in turn lead to a higher bioavailability of the formulated therapeutic moiety. Various GRAS (generally regarded as safe)-approved saturated and unsaturated fatty acids and nonionic surfactants are commonly used for formulating nanoemulsions. The formulation of nanoemulsions can either involve appropriate energy input (ultrasonication, high-pressure homogenization, microfluidization) or may proceed spontaneously. There are a few commercial products, such as Estrasorb® and Flexogan®, which are based on nanoemulsion technology [19,20]. Estrasorb® is an estradiol topical emulsion developed by Novavax and is recommended for the reduction of vasomotor symptoms in menopausal women. Estrasorb® is composed of soybean oil, polysorbate 80 and water and is based on micellar nanoparticle technology. This product demonstrates that poorly water-soluble molecules like estradiol can be successfully formulated and commercialized using nanoparticulate delivery vehicles [20,21].
Yet another example of a nanoemulsion-based product is Flexogan®, developed by AlphaRx, Canada, a pain relief cream based on a colloidal dispersion of nanoparticles. The oil droplets contain natural pain medicaments such as menthol and camphor and, as nanoparticles, permeate faster through the skin, thereby providing rapid relief. Once again, the nanoparticulate delivery is responsible for the higher bioavailability and quicker onset of action of the encapsulated actives [22].
The bioavailability of poorly water-soluble drugs is frequently related to the particle size of the drug. Particle size reduction increases the overall surface area and improves dissolution, thus leading to a higher bioavailability of the drug. Formulating drug nanocrystals is one of the most successful strategies to improve drug solubility and bioavailability. Numerous methods may be employed to generate drug nanocrystals, including high-pressure homogenization, media milling and nanoprecipitation [12,23].
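The link between particle size and dissolution invoked here is conventionally expressed by the Noyes-Whitney relation, which the chapter does not write out explicitly:

```latex
% Noyes-Whitney dissolution rate: reducing particle size increases the
% effective surface area A, accelerating dissolution of a poorly soluble drug.
\frac{dC}{dt} = \frac{D\,A}{h}\,(C_s - C)
```

Here D is the diffusion coefficient, A the surface area of the dissolving particles, h the diffusion layer thickness, Cs the saturation solubility and C the bulk concentration; nanonization raises A dramatically, and for particles well below 1 μm it can also modestly raise Cs.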
High-pressure homogenization involves passage of a coarse drug suspension, along with a suitable stabilizer, through the tiny homogenizer gap at very high pressure (up to 2,000 bar) [23]. This process can be performed in water or in non-aqueous media; the latter are specifically useful when the drug is prone to degradation in aqueous media. The applied pressure generates nanoparticles through various mechanisms, such as cavitation, disintegration and shearing, and based upon these, three important technologies have been designed for nanocrystal production using high-pressure homogenization: piston-gap homogenization in water (Dissocubes® technology), Microfluidizer technology (IDD-P™ technology) and Nanopure® technology [12,23].
Triglide® is the first clinically approved nanocrystal product developed using high-pressure homogenization and is indicated for the treatment of hypercholesterolemia. Triglide® was produced using the IDD-P™ technology by Skyepharma and has been marketed by Sciele Pharma Inc. [23,24]. The drug candidate in this product is fenofibrate, a lipophilic compound that is practically insoluble in water. The bioavailability of fenofibrate is severely limited by its poor water solubility, although clinical observations have revealed higher bioavailability in fed-state patients. This has been attributed to the lipids and associated compounds in the meal, which enhance its solubility and hence its absorption [25]. In another study, micronization of fenofibrate was found to enhance its dissolution and hence its oral bioavailability. This study also explains why fenofibrate nanocrystals, with a still smaller particle size, further improve its bioavailability compared with the micronized form [26].
Of all the homogenization methods, media milling has been the most successful in producing drug nanocrystals, primarily due to its simplicity and scalability. Media milling is also economical compared to the other methodologies. Here, nanoparticle generation depends upon the shearing forces and impact between the moving beads of milling material and the mixture of drug and stabilizer. Despite a few limitations of this method, such as drug loss and adhesion, it is still preferred by the pharmaceutical industry for generating drug nanoparticles [11, 27].
Rapamune® is one of the first nanocrystal products, developed using media milling technology by Elan Drug Technologies. Rapamune® consists of sirolimus, a poorly water-soluble compound indicated for prophylaxis of organ rejection in patients receiving renal transplants. Elan termed this technology NanoCrystal® technology; it involves particle size reduction of the drug to formulate drug nanoparticles [11, 23]. The particle size reduction leads to faster dissolution, higher absorption and hence enhanced bioavailability. Rapamune® was found to overcome the problems associated with the conventional sirolimus (rapamycin) oral solution. It demonstrated 27% higher bioavailability, improved patient compliance and better shelf-life when compared with the conventional product. Additionally, the product also met with tremendous economic success [28–30].
Emend® is another successful product, marketed by Merck and developed by Elan Drug Technologies. Aprepitant, the active ingredient of Emend®, has poor aqueous solubility (3–7 μg/ml) with moderate permeability (Caco-2 permeability reported at 7.85 × 10⁻⁶ cm/s). With the conventional formulation of aprepitant, it was observed that food plays a significant role in the rate and amount of drug absorbed. Emend®, on the other hand, eliminated this requirement, and drug absorption and bioavailability were enhanced by 600% as compared to the conventional product. This product was also recently approved in Japan, and is now available in the US, Europe and Japan [31, 32].
Various other products, like TriCor® (fenofibrate), Megace ES® (megestrol) and Invega® Sustenna®/Xeplion® (paliperidone palmitate), are also produced as nanoparticles by media milling to overcome solubility issues. The details of all such products are listed in Table 1.1.
The above two methods describe the 'top-down' approach of particle size reduction, aimed at enhancing the solubility and bioavailability of drug candidates. These methods are relatively simple and economical; however, the processes become complex when formulating sensitive molecules that require precise size and surface control.
In such cases, the 'bottom-up' approach provides a challenging but attractive alternative for generating drug nanoparticles. The approach is specifically useful when a nanoparticle needs to be loaded with many active ingredients or its surface needs to be functionally altered [11, 33, 34].
Based on this approach, the nanoprecipitation method involves the generation of drug nanoparticles by nucleation and growth of drug crystals. The nucleation process is triggered by dissolving an excess of drug in a suitable solvent, resulting in a supersaturated solution. The supersaturated mixture is then normally added to an antisolvent–stabilizer mixture to produce nanocrystals. Various parameters in this process control the ultimate size and surface characteristics. Nanocrystals produced by the nanoprecipitation method are presently at the preclinical stage, and to date these systems have not met with clinical or industrial success [11, 33–35].
Though the pharmaceutical industry is witnessing enormous developments in the field of nanoparticulate drug delivery systems, it still faces numerous scientific questions and challenges. Limited solubility and poor bioavailability prompted the pharmaceutical industry to look beyond conventional drug delivery systems, and that was the primary reason for nano-drug delivery systems to occupy a significant position in formulation research and development. The development of these technologies remains limited to a few pharmaceutical companies and continents, and wide-ranging acceptance of nanoparticle-based drug delivery systems is still far from realization. The developed nanoparticulate drug delivery systems need to be investigated for their own unique pharmacological actions and safety profiles. Their drug release profiles, in vitro–in vivo correlations (IVIVC), and pharmacokinetic and pharmacodynamic profiles need to be thoroughly established before they can be routinely adopted for clinical applications.
One of the major drawbacks associated with conventional drug delivery systems is their non-specific nature. Conventional dosage forms are generally formulated using an excess of drug compared to its actual dose, because these formulations deliver only a small fraction of the drug to the affected area while a major part is distributed throughout the body. This random distribution leads to unwanted side effects and toxicities [36]. Nanoparticles are able to provide site specificity and can be targeted to a specific tissue or organ of the body to give the desired therapeutic effect with minimal side effects. This targeting capacity is one of the most important reasons behind their success. Targeted drug delivery has proven highly beneficial for the treatment of cancer, since the majority of anticancer drugs are known to harm non-cancerous body cells [37]. Drug targeting through nanoparticles is usually achieved by surface functionalization with various ligands, which can identify and bind to specific cellular receptors [38].
Targeted drug delivery is a broad term and has been investigated for decades, specifically for cancer. This research has also been extended to other organs susceptible to site-specific diseases, and the following sections of this chapter are dedicated to these numerous facets of targeted drug delivery. Targeted drug delivery is normally achieved via either active or passive targeting [39].
Active targeting involves direct routing of the drug and/or the delivery system to the diseased biological site to minimize side effects. RNA interference (RNAi) therapy and monoclonal antibodies are well-known examples of this approach. The RNAi approach is a rapidly expanding field and has gained particular momentum due to the pharmaceutical industry's huge investments in developing RNAi-based therapeutics. This therapy has potential for the treatment of various fatal diseases like cancer, viral infections and various genetic diseases; the details of RNAi therapy are described in Chapter 2 of this book. On the other hand, monoclonal antibody-based therapy has been investigated for diseases such as cardiovascular disorders, inflammatory disorders and cancer. The antibodies bind only to the specific cells they are intended to target and stimulate an immunological response against these targeted cells [40, 41].
Passive targeting may be achieved by administering drug delivery systems via different delivery routes, where the delivery system enhances the effect of the drug by exploiting the characteristics of the targeted site. For example, drug delivery is enhanced through the more permeable cancer tissues, as compared to their normal counterparts, due to the leaky vasculature and enhanced permeability and retention (EPR) effect of the former. Similarly, delivery through the nasal, transdermal and intra-uterine routes is also utilized in passive targeting [40, 42].
Targeted delivery for cancer is one of the most-investigated areas, with the majority of commercial products belonging to this category. Conventional cancer therapies, i.e. chemotherapy and radiotherapy, are effective in the early stages of cancer; however, the adverse effects arising from these therapies pose a major challenge [43]. Nanoparticle-based targeted drug delivery offers huge potential for cancer therapy, primarily due to the favorable dimensions and surface functionalities of these carrier systems. These systems can enter tumor cells and interact with cellular receptors, and are thus able to inhibit the growth and spread of the cancer. Targeted nanocarriers for cancer facilitate precise cellular and molecular alterations and are hence more effective and less harmful to normal cells than conventional treatments, including chemotherapy and radiotherapy [44]. Table 1.2 cites some of the nanoparticulate systems, along with their respective oncological indications, which have been developed by various companies.
The majority of nanoparticulate systems for targeted drug delivery to cancers are surface functionalized with appropriate ligands. These ligands facilitate cellular binding, internalization and specific therapeutic effects. Optimization of the process of attaching the ligand to the nanoparticulate surface is one of the chief factors that govern precise targeting. Another important factor is the exclusivity of the targeting ligand to the cancer cells, with negligible occurrence of its receptor on the other, healthy cells of the body [43–45]. Some of the common targeting strategies are listed in Table 1.3.
The past decade has witnessed a sharp rise in the number of targeted drug nanoparticles approved for cancer therapy. Liposomal nanoparticles were the first in this category and were successful in alleviating the adverse effects associated with drugs in conventional dosage forms [43]. An excellent example of this is liposomal doxorubicin, which resulted in a significant reduction in the severe cardiotoxicity of the native drug. Doxil® (pegylated liposomal doxorubicin) was the first nanoparticulate product approved for targeted cancer therapy. Pegylated liposomes demonstrated improved pharmacokinetic and pharmacodynamic profiles. Polyethylene glycol (PEG) coating thus constituted one of the first commercial targeting strategies [43, 63]. It also afforded additional stability to the nanoparticles and prevented their elimination by the reticuloendothelial system (RES). Tagging of PEG on the nanoparticle surface, also known as 'steric stabilization' or the 'stealth effect', was also reported to improve blood circulation time and reduce uptake by macrophages [43, 64]. Abraxane® is another commercially successful product for targeted cancer therapy. Abraxane® (paclitaxel protein-bound particles for injectable suspension) is an albumin-bound form of paclitaxel with a mean particle size of approximately 130 nm. Paclitaxel, indicated for breast cancer therapy, is a poorly soluble molecule, and there are many formulation challenges associated with it, such as its inherent solubility problems and toxicity issues. Moreover, the existing formulation of this drug (Taxol®) exhibits severe adverse effects attributable to its excipient, Cremophor EL. When compared with Taxol®, Abraxane® provided improved tumor cell penetration of paclitaxel and decreased the occurrence of Cremophor-related adverse effects [43, 65, 66]. These benefits of Abraxane® are due to the natural property of its key ingredient, albumin, to transport lipophilic molecules through noncovalent binding. Albumin, a predominant plasma protein, binds to the glycoprotein receptor gp60 and maintains the transendothelial oncotic pressure gradient by regulating the transport of bound/unbound plasma components such as fatty acids, steroids, thyroxine and amino acids. The receptor subsequently binds to caveolin-1, with successive formation of caveolae, a key determinant of transcellular endothelial permeability. Albumin accumulates in breast, lung, head, neck and prostate cancers by binding to secreted protein acidic and rich in cysteine (SPARC). The SPARC–albumin interaction supports the accumulation of albumin in tumors and increases the effectiveness of albumin-bound paclitaxel (nab-paclitaxel). The success of nab nanoparticles has paved the way for their use in different types of cancer, and a large number of clinical trials utilizing these nanoparticles are presently in progress [43, 67–70].
Drug development for the treatment of central nervous system (CNS) disorders has been increasing; however, preclinical and clinical successes are still rare. The global market for CNS drugs is limited for various reasons, including the high cost of drug development, a very high risk-to-benefit ratio, a limited understanding of these diseases and the formulation intricacies of brain delivery [71]. Other difficulties in efficiently treating CNS disorders include the limited number of available drugs and the lack of a broad understanding of the etiology of brain diseases. Examples of the diseases in this category (see Figure 1.1) include Alzheimer's disease, Huntington's disease, Parkinson's disease, amyotrophic lateral sclerosis, HIV infection of the brain and brain tumors [71, 72].
However, in the past few years, there has been tremendous improvement in drug discovery for CNS diseases. According to a report from the Pharmaceutical Research and Manufacturers of America (PhRMA), approximately 313 diverse drugs for various CNS disorders (mainly addiction and depression) are currently in the research and production pipeline. Additionally, the Tufts Center for the Study of Drug Development states that approximately 1747 drugs are in development for various other CNS disorders like multiple sclerosis and epilepsy [73].
[Figure: An overview of market progress in CNS drug development (data based on press information released by the respective pharmaceutical companies up to July 2012)]
However, the physicochemical characteristics of most of these molecules are unfavorable for efficient transport through the blood–brain barrier (BBB), one of the most difficult biological barriers and a major hindrance to effective strategies for treating CNS diseases. The BBB consists of different cellular components, such as endothelial cells, pericytes, astrocytes and microglial cells, interconnected by tight, impermeable junctions. In the presence of these natural barriers, drug delivery to the CNS may be performed using invasive methods, non-invasive methods or their combinations. Invasive methods can generate various complications and are thus adopted only in cases of fatal diseases. In the non-invasive category, nanoparticulate drug carriers have demonstrated great potential for successful therapies [74]. Surface modification of these systems allows them to be directed to specific receptors in the brain, while their small size can be exploited for traversing the BBB. Additionally, nanoparticles also prevent structural modification of the drug and can deliver it to the desired site in its original form. Other properties of nanoparticles, such as biodegradability, low particle size (≈100 nm), prolonged circulation through surface modification, receptor-mediated transcytosis, large-scale production capability, the possibility of loading drugs/peptides and the ability to control the release of the encapsulated active agent, make them ideal for targeted drug delivery to the brain [72–75]. Table 1.4 indicates various strategies for brain targeting using nanoparticles.
There is an immense market opportunity for colon-targeted drug products; it has been estimated that the US market for colorectal cancer products will reach $6,050 million by 2016, at a compound annual growth rate (CAGR) of 8.6% [98]. Colon-targeted drug delivery has steadily grown for the treatment of various colon-specific diseases as well as for the systemic delivery of macromolecules such as peptides.
Targeting the colon has become very important for treating diseases like inflammatory bowel disease, Crohn's disease, colon cancer and ulcerative colitis. Our research group has specifically investigated the efficacy of polymeric nanoparticles for colon cancer as well as for ulcerative colitis [99].
[Table: Examples of brain-targeted nanoparticles at the preclinical stage (modified from [76])]
Nanoparticles for colon targeting can be formulated using natural or synthetic polymers, and a wide range of polymers have been established or are being investigated for their colon-targeting potential. Colon-targeted nanoparticles have demonstrated the ability to enhance the solubility, absorption and bioavailability of encapsulated drugs [99, 100]. Additionally, colon-targeted nanoparticles have also shown huge potential for delivering peptides. Etiologies of various colon diseases show that macrophages are activated in the course of the disease cascade as part of the immune response of the inflamed cellular structures in the colon. These activated macrophages can efficiently take up nanoparticles, which governs the accumulation of a large number of nanoparticles in the colon region. This accumulation, in turn, elicits a pronounced therapeutic effect from colon-targeted nano-systems [99, 100].
Literature reports describe the employment of nanoparticles for the colon-targeted delivery of various encapsulated moieties. Zheng et al. [101] reported the incorporation of thymopentin, a potent immunomodulating drug, into pH-sensitive chitosan nanoparticles coated with Eudragit S100 (ES100) to improve the stability and oral bioavailability of the encapsulated agent. Uniform nanoparticles with a size of about 175 nm and a moderately high encapsulation of around 76% were formulated. The nanoparticles were found to effectively protect the encapsulated moiety from enzymatic degradation and to prolong its degradation half-time. Results of a lymphocyte proliferation test and in vivo evaluation in rats demonstrated that the nanoparticles could be used as effective vectors for the oral delivery of thymopentin. Wang et al. [102] reported the development of cyclosporine A (CyA)-loaded pH-sensitive ES100 nanoparticles. Various in vivo studies conducted by this research group revealed that the nanoparticles increased the absorption of CyA, which could be attributed to a fast stomach-emptying rate, site-specific absorption, a lower degradation rate of the entrapped moiety by the luminal contents and the high bioadhesion of the nanoparticles to the intestinal mucosa. The authors claimed that the investigation could be helpful for the design of dosage forms for other peptide or protein drugs. Researchers have also reported the efficiency of ES100 nanoparticles as a favorable vehicle for the selective absorption of drugs in the gut when administered by the oral route. This was demonstrated by encapsulating rhodamine 6G (Rho) as a model agent. The nanoparticles were evaluated for Rho release profiles, distribution, adhesion and transit in the rat gut. It was observed that the nanoparticles decreased the distribution and adhesion of Rho in the stomach but increased these values in the intestine. Additionally, these nanocarriers were reported to control the drug release sites and release rate in the GI tract [103]. Scientists have also investigated the potential of ES100 to improve the oral delivery of HIV-1 protease inhibitors in dogs [104]. Incorporation of an HIV-1 protease inhibitor, CGP 57813, into ES100 nanoparticles was found to substantially increase the oral bioavailability of this compound after administration to dogs and to achieve plasma levels comparable to those obtained in studies conducted in lower animals such as mice [105].
The lungs present a promising route for drug delivery due to the noninvasive nature of administration and the possibility of both systemic and local drug delivery. The pulmonary drug delivery market is expected to grow to $24.5 billion by 2015, at a CAGR of 2.8% [106]. There is a growing need for novel therapeutic systems for the treatment of respiratory diseases like tuberculosis, pulmonary hypertension, cystic fibrosis, asthma, chronic obstructive pulmonary disease (COPD) and severe acute respiratory syndrome (SARS). The pulmonary route is also being increasingly exploited for the delivery of peptides and vaccines. In view of this enormous need, nanoparticles can provide the right therapeutic option for the treatment of a variety of respiratory diseases [107]. Nanoparticle systems for targeted pulmonary drug delivery offer various advantages, such as uniform distribution of drug in the alveoli, bypassing of first-pass metabolism, high solubility, sustained release, reduced side effects, delivery of macromolecules and improved patient compliance. Lung targeting using nanoparticles can also be achieved through the parenteral route; however, researchers have preferred the pulmonary route over the latter [108, 109]. A summary of various nanoparticulate carriers intended for targeting the lung is presented in Table 1.5.
Table 1.5 Examples of nanoparticulate carriers targeted to the lung (modified from [107–109])

| Therapeutic agent | Carrier | Indication | Reference |
|---|---|---|---|
| Amphotericin B | Phospholipid and apolipoprotein | Lung infection | [114] |
| Amiloride hydrochloride | Liposome | Cystic fibrosis | [108] |
| Secretory leukocyte protease inhibitor | Liposome | Cystic fibrosis | [115] |
| Interleukin-4 antisense oligodeoxynucleotides | Polymeric nanoparticle | Asthma | [116] |
| Vasoactive intestinal peptide | Protamine nanoparticle | Asthma | [117] |
| Indomethacin, ketoprofen | Solid lipid nanoparticles | Asthma | [108] |
| Antisense oligonucleotide 2′-O-methyl-RNA | PLGA nanoparticles | Lung cancer | [118] |
| Leuprolide | Liposome | Lung cancer | [108] |
| HLA-A*0201-restricted T-cell epitopes from Mycobacterium tuberculosis | Chitosan nanoparticles | Tuberculosis | [119] |
| V1Jns plasmid encoding antigen 85B from M. tuberculosis | PLGA-PEI nanoparticle | Tuberculosis | [120] |
| Insulin | PLGA, PLGA-PEI, PEI, chitosan, alginate | Diabetes | [121] |

It is thus evident that nanoparticulate drug carriers cater to distinct needs and play definite roles in overcoming the 'lags' encountered with conventional drug delivery systems. Numerous types of fascinating nanoparticulate drug carriers have stemmed from the research conducted during the last three decades. Also notable is the number of conditions where they may be applied, via different administration routes, for better therapeutic outcomes. With persistence from both academia and industry, some of these systems have successfully accomplished the cumbersome transition from laboratories to clinics. In this book, we intend to provide an overview of these different therapeutic nanoparticles, the challenges encountered in their transition to clinical settings, the solutions which may be adopted for surmounting these challenges and the success stories of some of these drug delivery nanoparticles.
|
A Chatbot can be defined as software that helps humans hold coherent conversations with a machine using natural language such as English. The conversation can be engaging at times, with large vocabularies and a broad range of conversational topics. Recently, the usage of deep learning has increased in industry, and the Chatbot is one of its applications [1–3]. Fig. 1 shows a user using the Chatbot for its various applications. This paper will help to create an open-domain Chatbot, which can later be adapted to a particular domain if needed, as shown in Fig. 1; this is done by changing the dataset, i.e., training the model with knowledge of the particular domain. Due to the open-domain nature of the Chatbot, it can be used to build an artificial-intelligence assistant that can hold real-life conversations with its user on any topic and in any situation. To make deep learning usable by everyone, a major deep learning library, TensorFlow, was implemented by Google [4] and made available as open source. TensorFlow [5] is a Python-friendly library bundled with machine learning and deep learning (neural network) models and algorithms. This paper shows the formation of a Chatbot with the Neural Machine Translation (NMT) model, which is an improvement on the sequence-to-sequence model. The Chatbot uses a Bidirectional Recurrent Neural Network (BRNN) [6]. The BRNN was chosen because conversation, i.e., the input to the Chatbot, is dynamic, meaning the length of the input is not fixed. The BRNN is also supported by an attention mechanism [7, 8], which further increases the capacity of the model to remember longer sequences of sentences. The concept of the Bidirectional Recurrent Neural Network can be understood as two independent Recurrent Neural Networks (RNNs) [9] taken together, sending signals through their layers in opposite directions. So a BRNN can be seen as a neural network connecting two hidden layers, running in opposite directions, to a single output. This helps the network to have both forward and backward information at every step, i.e., to receive information from both past and future states. The input is fed in one direction in normal time order and in the other in reverse order. The concept of the Extended Long Short-Term Memory (ELSTM) [10] can also be used with the Dependent BRNN (DBRNN), as it has been reported to improve results by 30% on labeled data. The training of a BRNN is done in the same way as for an RNN, as the two directional neurons do not interact with one another; the weights are updated only after both forward and backward passes have been done [11].
Fig. 2 shows the general BRNN architecture with the hidden states for the forward and backward directions. The variables 'O', 'I' and 'H' denote the 'Output', 'Input' and 'Hidden' states, respectively. The values {X1, X2, …, Xn} are the input signals to the network and the values {Y1, Y2, …, Yn} are the output signals computed by the network.
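As an illustration, a bidirectional recurrent layer of the kind sketched in Fig. 2 can be assembled in a few lines of TensorFlow/Keras. This is a minimal sketch, not the exact network trained in this paper; the embedding dimension and unit count are placeholder values, while the vocabulary size matches the per-step figure reported later.

```python
import tensorflow as tf

VOCAB_SIZE = 15000   # per-step vocabulary size used later in this paper
EMBED_DIM = 128      # placeholder embedding dimension
UNITS = 256          # placeholder recurrent units per direction

# Two LSTMs read the token sequence in opposite time orders; their hidden
# states are concatenated at every step, so each output position carries
# information from both past and future tokens.
encoder = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(UNITS, return_sequences=True)),
])

# Dummy batch: 2 sentences of 10 token ids each.
tokens = tf.random.uniform((2, 10), maxval=VOCAB_SIZE, dtype=tf.int32)
states = encoder(tokens)
print(states.shape)  # (2, 10, 512): forward and backward states concatenated
```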
This paper also evaluates the MacBook as a system for deep learning. Table 1 shows the system specification and other software details, such as the operating system and the version of TensorFlow used. Technology is being upgraded every day; even the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) are becoming faster [12]. The system used is a 2017 MacBook Air by Apple. The laptop was kept at room temperature throughout the training of the model.
There has been an increase in the development of conversational agent systems in the commercial market, for example in the retail, banking and education sectors. Research is being done to improve accuracy and make conversations between Chatbot and user as close as possible to real-world conversations. Apart from the traditional rule-based techniques used earlier in Chatbot development and other straightforward machine learning algorithms, advanced concepts and techniques such as Natural Language Processing (NLP) and deep learning techniques like Deep Neural Networks (DNN) and Deep Reinforcement Learning (DRL) are being used to model and train Chatbot systems. In the early days, translation was done by breaking up source sentences into multiple tokens or chunks and then translating phrase by phrase. Sequence-to-sequence has been a popular model, based on neural networks and deep learning, used in Neural Machine Translation [13]. It is used in tasks like machine translation, speech recognition and text summarization. The sequence-to-sequence model has an encoder and a decoder, both using a vanilla RNN [14] by default. The encoder takes the source sentence as input and builds a thought vector, a vector containing a sequence of numbers that represents the meaning of the sentence. Finally, the decoder processes the thought vector fed to it and emits a translation, called the target sequence or sentence. But a vanilla RNN fails when long sequences of sentences are fed to the model, as the information needs to be remembered; this information frequently becomes larger for bigger datasets and creates a bottleneck for RNN networks. Variations of the RNN, like the BRNN with an attention mechanism, Long Short-Term Memory (LSTM) or the Gated Recurrent Unit (GRU) [15], have to be used to handle failure on longer sequences. To understand the meaning behind a sentence, the intention, facts and emotions described in it must be analyzed. In [3], the statistical difference was 63.33% for identifying two groups or personalities on the basis of their sentiments. The techniques of NLP and machine learning help to deeply analyze sentence sentiment and create a comfortable environment for humans to converse with a machine. If the dataset is text, sentiment features can be classified at the text level or the document level [16]. Deep learning methods like the Convolutional Neural Network (CNN) [17] and the RNN are used in document sentiment classification, as it is the more difficult of the two classifications mentioned above. Chatbots have also been developed for young adolescents, engaging them in their most preferred channel of communication, smartphones, and successfully guiding them to adult-focused care; in [2], the engagement time was 97%. Kyoto-NMT is an open-source implementation of the NMT paradigm [18]. It uses Chainer, a deep learning framework. It uses two layers of LSTM with an attention model in its recurrent neural network, and whitespace for making tokens in data preparation. The vector size used is 1000. The training data is a sentence-aligned parallel corpus in utf-8 text files: one for source-language sentences and the other for target-language sentences. A JSON file is created containing all the parameters, and SQLite is used for database creation and for keeping track of training progress. The BLEU score is computed by applying greedy search on the validation dataset. As real translation was involved (Japanese-to-English), the BLEU score reached was 26.
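To make the encoder–decoder idea concrete, the following is a minimal vanilla sequence-to-sequence sketch in TensorFlow/Keras, in which the encoder's final state serves as the "thought vector" passed to the decoder. It is only an illustration of the baseline described above, not the BRNN-with-attention model used in this paper; all sizes are placeholders.

```python
import tensorflow as tf

VOCAB, EMBED, UNITS = 15000, 128, 256   # placeholder sizes

# Encoder: compresses the source sentence into its final hidden state,
# the "thought vector".
enc_in = tf.keras.Input(shape=(None,))
x = tf.keras.layers.Embedding(VOCAB, EMBED)(enc_in)
_, thought = tf.keras.layers.GRU(UNITS, return_state=True)(x)

# Decoder: starts from the thought vector and emits target-token logits.
dec_in = tf.keras.Input(shape=(None,))
y = tf.keras.layers.Embedding(VOCAB, EMBED)(dec_in)
y, _ = tf.keras.layers.GRU(UNITS, return_sequences=True,
                           return_state=True)(y, initial_state=thought)
logits = tf.keras.layers.Dense(VOCAB)(y)

seq2seq = tf.keras.Model([enc_in, dec_in], logits)
seq2seq.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Because the whole source sentence must squeeze through one fixed-size vector, this baseline degrades on long inputs, which is exactly the bottleneck that motivates the BRNN and attention mechanism used here.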
Currently, there are many performance metrics, and certain measurement standards are followed across the industry for Chatbots [20]. Different organizations need Chatbots according to the nature of their work and the market surrounding it. One of the most important performance metrics for a Chatbot is the structure and length of its conversation. The length of the output sentence must be appropriate and in context with the conversation being held; the shorter and simpler the structure of the output sentence, the faster the solution, which increases the customer satisfaction rate. To understand this metric with an example from the banking sector: in banks, the Chatbot is mainly used to guide the user through the bank's policies, schemes and other customer inquiries about their account. This helps users perform their tasks quicker and also lowers human call assistance, thus cutting the cost of the service. Consumer satisfaction also brings the second metric, the retention rate, which is very important for Chatbot success. Companies aim for a significantly high retention rate, indicating customer satisfaction. Automated calls and Chatbot messengers are being used to replace other communication mediums (i.e., lowering call volumes handled by humans). The retention rate increases when Chatbots are trained further to support users in managing their accounts without speaking to a human assistant. Another metric is the ability of the Chatbot to produce personalized replies to the user. This means that the Chatbot should take in the source sentence, understand and analyze it, and produce an output statement mapped to the particular problem or query of the user. Companies try to personalize and customize the output statement according to each user's needs, like a bank suggesting a relevant offer or credit card scheme according to the balance in the user's account, salary, current loans and spending history. For example, Erica, Bank of America's AI virtual assistant Chatbot, combines predictive analysis and NLP to help its users access their balance information, get tips on saving money based on their spending habits, make transactions between accounts and schedule meetings at financial centers and banks. E-market and retail Chatbots create an engaging environment for users to shop; through this environment, the Chatbot transforms itself into a personal shopping assistant. Unlike banking and financial sector Chatbots, retail conversational agents are designed for a higher number of conversational steps, holding the user's attention, providing details and encouraging them to browse more and ultimately purchase the product. For instance, eBay's ShopBot helps users find the best deals from its list of a billion products. It is easy to talk to, like a friend, whether one is searching for a specific product or browsing to find something new. The studies discussed above show networks designed for small datasets and short input sentences, which are not fit for real-life conversation, as humans tend to speak in longer sentences. To handle real-world conversation, a model like the BRNN is important for knowing the conversational context and references, from the past as well as the future. The attention mechanism is an important attachment to the network, as it helps to weight particular references in the input sentences.
Also, there are no hard metrics for evaluating Chatbot performance [21], but parameters like perplexity, learning rate and BLEU score can represent how close one can approach while training the model.
The model, a BRNN with an attention model, helps not only with short but also with longer token sequences. The attention model helps to remember longer sequences at a time and also helps with the context problem, where both historical and future information is required. In the real world, language may not necessarily come in perfect sequence; sometimes one has to use context and hear the full conversation before going back and responding to the words leading up to that point. Humans tend to speak in longer sentences to convey meaning. This is what makes the combination of BRNN and attention model the right choice for Chatbots. The BRNN structure forms the acyclic graph that can be seen in Fig. 3.
To make the prediction $\hat{y}^{\langle t \rangle}$ at time $t$, an activation function $g$ is applied:

$$\hat{y}^{\langle t \rangle} = g\left(W_y\left[\overrightarrow{a}^{\langle t \rangle};\, \overleftarrow{a}^{\langle t \rangle}\right] + b_y\right)$$
where $W_y$ is the weight matrix for the output, set according to the network, $\overrightarrow{a}^{\langle t \rangle}$ and $\overleftarrow{a}^{\langle t \rangle}$ are the forward and backward activations at time $t$, respectively, and $b_y$ is the output bias; the prediction thus combines information propagated from both directions. The activation functions in a neural network are the functions attached to the nodes (or neurons), which are triggered when the input value at the current node is relevant for making a prediction. There are many types of activation functions, but with BRNNs, sigmoid and logistic activation functions are mostly used. For example, in the network shown in Fig. 3, a couple of statements are given as input. Statement 1: He said, "Indira went to the market." Statement 2: He said, "Indira Gandhi was a great Prime Minister." At the instant statement 2 is inserted into the network, at time step 3, where the word "Indira" is inserted, the prediction $\hat{y}^{\langle 3 \rangle}$ of the cell is checked. Because the network is bidirectional, forward information flows from cells 1 and 2 to cell 3, and backward information flows from cell 9 (through all the cells in between) to cell 3, helping the cell to predict that 'Indira' in statement 2 is the Prime Minister and not the 'Indira' who went to the market in statement 1. If a simple RNN had been used, the output would have followed statement 1, as the network would not have had the future information; RNNs are unidirectional, i.e., they run only in the positive direction of time [22]. In the attention mechanism, the attention vector is generated by calculating a score, and the calculated vector is retained in memory so as to choose among the best candidate vectors. The score is calculated by comparing each hidden target state with the source hidden states. To apply the attention mechanism, a single-directional RNN can be used on top of the BRNN structure. Each cell in the attention network is given a context as input; this type of network is also called a context-aware attention network [23]. The predicted value from each BRNN cell is taken and combined with the value from the previous state (or neuron) of the attention network to calculate the attention value. One can also say that the context is a weighting of the features from different timestamps. Fig. 4 shows how the weighted value is taken in the attention cell, as described above. The context $C$ for cell 1 of the attention network can be computed as:
$$C^{\langle 1 \rangle} = \sum_{t'} \alpha^{\langle 1, t' \rangle}\, a^{\langle t' \rangle}, \qquad a^{\langle t' \rangle} = \left(\overrightarrow{a}^{\langle t' \rangle},\, \overleftarrow{a}^{\langle t' \rangle}\right)$$

where $\alpha^{\langle 1, t' \rangle}$ is the value from the activation function applied to the BRNN output at each cell, used as the weight for the context computation. More generally, $\alpha^{\langle t, t' \rangle}$ is the amount of attention that $\hat{y}^{\langle t \rangle}$ should pay to $a^{\langle t' \rangle}$.
$S_n$ represents the states of the attention model, where $n \in W$ (the whole numbers); $S_0$ is the initial state, whose value is set according to the network. $\hat{z}^{\langle t \rangle}$ at time step $t$ is the attention value computed by the attention neuron, where $t \in N$. Fig. 4 is the extended structure which, when combined with the BRNN in Fig. 3, produces the complete architecture of the deep learning network. This network has a quadratic time cost; since, in a Chatbot, very long sentences or paragraphs are generally not used to converse, this cost may be acceptable, though other research in this field aims to reduce the quadratic time cost. The attention mechanism is one of the important methods in deep learning, especially in the areas of document classification [24], speech recognition [25] and image processing [26–28].
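The normalization and weighted sum behind the context computation can be illustrated in a few lines of NumPy. This sketch uses a simple dot-product score rather than a learned scoring network, so it shows only how the attention weights α are formed and applied; the function and variable names are illustrative.

```python
import numpy as np

def attention_context(query, annotations):
    """query: (d,) previous attention-RNN state; annotations: (T, d)
    concatenated BRNN activations a<t'>. Returns context and weights."""
    scores = annotations @ query                    # one score per source step
    scores -= scores.max()                          # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax: weights sum to 1
    context = alpha @ annotations                   # weighted sum of states
    return context, alpha

rng = np.random.default_rng(0)
T, d = 9, 4                       # e.g., the 9-token sentence in the example
a = rng.normal(size=(T, d))       # stand-in for the BRNN activations
s = rng.normal(size=(d,))         # stand-in for the previous attention state
c, alpha = attention_context(s, a)
print(alpha.round(3), c.shape)    # attention over the 9 steps, context (4,)
```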
The procedure for implementing the methodology is depicted in Fig. 5.
The Reddit dataset [29] has been used to build the database for the Chatbot. The dataset contains comments from January 2015, in JSON format. The fields of the dataset include parent_id, comment_body, score, subreddit, etc. The score is most useful for setting the acceptance criterion, as it shows that a particular comment is the most accurate reply to the parent comment (parent_body). The subreddit field can be used to make specific types of Chatbots, such as scientific or other particular-domain bots. A subreddit is a specific online community whose posts are dedicated to a particular topic. The database formed after pre-processing the dataset has a size of 2.42 GB and contains 10,935,217 rows (i.e., parent comment–reply comment pairs).
First, for training the model, a database is required, so the dataset is converted into a database with fields like parent_id, parent, comment, subreddit, score and unix (to track time). To make the data more admissible, only comments with fewer than 50 words but more than 1 word are taken (in case the reply to the parent is empty). All newline characters, '[deleted]' and '[removed]' comments, etc., are also removed. If a comment body is valid according to the acceptance criteria and has a higher score than the comment previously paired with the same parent_id, it replaces that comment. If a comment has no parent comment, it can itself be the parent comment of some other comment (i.e., it is a main thread comment on Reddit). For database creation, the data is paired into parent and child comments: each comment is either a main parent comment or a reply comment, but each has a parent_id, and a parent comment and its reply comment share the same parent_id. The pairs are made according to the parent_id, mapping each parent comment to its best child (reply) comment. Any comment, parent or child, must have an acceptance score of at least two. When a new comment is encountered that matches the parent_id of a previously entered reply comment, its score is compared with the entered reply's score; if the new comment has a better score, it replaces the previous reply comment and the associated data, otherwise the row remains unchanged. Further, if the comment encountered has a parent body not yet paired with any reply comment, the comment is mapped to its parent body; if the comment has no parent body, a new row is created for it, as the new comment can be a parent to some other reply comment. On creation of the database, 10,935,217 parent–reply comment pairs (rows) are created.
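The pairing logic just described can be sketched as follows. The field names follow those listed above, but the schema and helper function are illustrative, not the authors' actual preprocessing script.

```python
import sqlite3

conn = sqlite3.connect("reddit_pairs.db")
conn.execute("""CREATE TABLE IF NOT EXISTS pairs (
    parent_id TEXT PRIMARY KEY, parent TEXT, comment TEXT,
    subreddit TEXT, unix INTEGER, score INTEGER)""")

def insert_or_replace(row):
    """Keep, for each parent_id, only the highest-scoring acceptable reply."""
    pid, parent, comment, subreddit, unix, score = row
    if score < 2:                        # acceptance criterion from the paper
        return
    cur = conn.execute("SELECT score FROM pairs WHERE parent_id = ?", (pid,))
    existing = cur.fetchone()
    if existing is None:                 # first reply seen for this parent
        conn.execute("INSERT INTO pairs VALUES (?, ?, ?, ?, ?, ?)", row)
    elif score > existing[0]:            # better reply found: replace it
        conn.execute("""UPDATE pairs SET parent = ?, comment = ?,
                        subreddit = ?, unix = ?, score = ?
                        WHERE parent_id = ?""",
                     (parent, comment, subreddit, unix, score, pid))
```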
After creating the database, the rows have to be divided into training data and test data. For both, two files are created (parent comment and reply comment). The training data contains 3,027,254 pairs and the test data contains 5,100 pairs. There are also lists of protected phrases (e.g., www.xyz.com should be a single token) and blacklisted words, to avoid feeding them to the learning network. The training files are fed to a multiprocessing tokenizer, as tokenization is CPU-intensive. The sentences are divided into tokens on the basis of spaces and punctuation, and each token acts as a vocabulary entry. For each step, the vocabulary size is 15,000, which is appropriate for systems with 4 gigabytes of virtual memory. The regex module is used for formulating the search patterns for the vocabulary; it is faster than the standard library and is basically used to check whether a string contains a specific search pattern. The neural network is designed as described above. Once training starts, the main hyperparameters (HParams) of concern in the metrics are the BLEU score (bleu), perplexity (ppl) and learning rate (lr). The BLEU score tells how well the model translates a sentence from one language to another; it should be as high as possible. Perplexity is a measure of the probability distribution, i.e., it indicates the model's prediction error. The learning rate reflects the model's learning progress in the network. As, in this paper, the language at both ends of the model is English, perplexity is more useful than the BLEU score. The learning rate is useful, but only when the model is trained with large data and for a long period of time; if the model is trained for a limited period or with less data, no significant change in the learning rate will be observed.
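A tokenizer of the kind described, splitting on whitespace and punctuation while keeping protected phrases such as URLs as single tokens, might look like the following sketch; the protected-phrase pattern and vocabulary cap are illustrative assumptions.

```python
import re
from collections import Counter

PROTECTED = re.compile(r"(https?://\S+|www\.\S+)")  # URLs stay single tokens
TOKEN = re.compile(r"\w+|[^\w\s]")                  # words or punctuation marks
VOCAB_LIMIT = 15000                                 # per-step size used above

def tokenize(sentence):
    tokens = []
    for i, chunk in enumerate(PROTECTED.split(sentence)):
        # Odd indices hold the protected matches; pass them through whole.
        tokens.extend([chunk] if i % 2 else TOKEN.findall(chunk))
    return [t for t in tokens if t]

def build_vocab(sentences):
    counts = Counter(tok for s in sentences for tok in tokenize(s))
    return [tok for tok, _ in counts.most_common(VOCAB_LIMIT)]

print(tokenize("Check www.xyz.com for details!"))
# ['Check', 'www.xyz.com', 'for', 'details', '!']
```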
Initially, before training the model, the perplexity was 16,322.15, the learning rate was 0.001 and the BLEU score was 0.00. The average time used by the system described above per 1,000 steps is between 4 and 4.5 h; taking the upper bound, training the machine to 23,000 steps took the system 103.5 h. The perplexity, learning rate and BLEU score at step 23,000 were 56.10, 0.0001 and 21.67, respectively. The maximum BLEU score the model reached was 30.16, at the 18,000th step. The model also passed one epoch at the 23,000th step. The learning rate is low and negligible, as changes were made externally to the weights in the neural network once training started. The performance evaluation is shown in Table 2. One can also compare the performance of the Chatbots in [18] and [19] with our result, as reflected in Table 3.
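Perplexity and the per-token cross-entropy are directly related: perplexity is the exponential of the average cross-entropy in nats (assuming the loss is reported per predicted token, as in TensorFlow's NMT reference code). The lines below map the reported perplexities back to the implied training loss.

```python
import math

def perplexity(cross_entropy_nats: float) -> float:
    """Perplexity is the exponential of the per-token cross-entropy."""
    return math.exp(cross_entropy_nats)

print(math.log(16322.15))  # ~9.70 nats: implied loss before training
print(math.log(56.10))     # ~4.03 nats: implied loss at step 23,000
print(perplexity(4.03))    # ~56.3: recovering the reported perplexity
```

Note that the initial perplexity (~16,322) is close to the vocabulary size of 15,000, which is what a near-uniform, untrained model over that vocabulary would produce.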
The graphs of perplexity and BLEU score are shown in Fig. 6. Among the other observations in TensorBoard, train_loss decreases when the model starts training; once train_loss starts increasing after reaching a minimum point, one should stop training the model, as very little or no change in model performance will occur. Thus, excessive training of the model can degrade performance. The smoothing of all graphs is done at a value of 0.96 for better interpretation.
The speed graph in Fig. 7 demonstrates the system speed per 1,000 steps. As mentioned above, no real translation is going on in this Chatbot, but the value should still increase initially; there will also be decreases at some points, since no real translation is taking place. The perplexity should fall in every case; if it does not fall, it means the model is not getting trained properly, and there will also be no significant change in the replies of the NMT Chatbot. There is variation in the system speed, as it depends upon the overall tasks being performed and other open, running applications. From the analysis of and experience with the system while working on this experiment, the MacBook Air is just enough for basic deep learning model training, but not adequate. If one wants to go higher and train intermediate or advanced models, the MacBook Air (2017) hardware is not enough, and there are insignificant changes in the later models of the MacBook Air series, for instance the MacBook Air (2020).
With the help of the test file previously created for validating the Chatbot's replies, Table 4 gives the comparison between the source dataset's test reply comments and the Chatbot's (NMT-Chatbot's) replies after 23,000 steps of training. The test reply comment is the real-world human reply to the test parent comment on Reddit; the NMT reply is the output from the Chatbot. The eight sentences were randomly picked from the parent comment field of the database created for training. The sentences range in length between 8 and 30 words. No punctuation in the sentences shown in Table 4 was added or removed.
A Chatbot using the deep learning NMT model with TensorFlow has been developed. The Chatbot architecture was built from a BRNN and an attention mechanism. The Chatbot knowledge base is open domain, using the Reddit dataset, and it gives genuine replies. In future work, the model will be rewarded for relevant and sentiment-appropriate replies; this will involve the Deep Reinforcement Learning (DRL) technique. The methodology used in implementing and training the Chatbot can also be used to train domain-specific Chatbots, for example in the scientific, healthcare, security, banking, e-market and educational domains. This approach will make building a Chatbot in any domain easier and can improve existing Chatbots based on simple RNN architectures or other neural networks by using the attention mechanism as above. To implement a domain-specific Chatbot (in healthcare, education, etc.), one can download the specific subreddit of the particular domain. Future work will also include building a healthcare Chatbot that guides patients with diseases like COVID-19 (pandemic), diabetes, high blood pressure and heart disease by providing information about the inquired disease, the food one can eat and ways to deal with several emergency situations; this Chatbot will be powered by a recommender system too. In this paper, the novel idea was to analyze the MacBook Air as a system on which to study and train deep neural network models. We find the MacBook Air to be a mediocre, basic-level system for deep learning. This result can help beginning students and other professionals to choose a system wisely before starting with deep learning.
|
The detailed facts surrounding the coronavirus disease 2019 (COVID-19) pandemic are still evolving; however, one of the most shocking aspects of the COVID-19 pandemic is how lethal this condition is for the older population (Dowd et al., 2020). The risk for death and severe illness with COVID-19 is best predicted by age: the likelihood of death increases exponentially with age among those who contract the virus in all countries where this has been examined (Figure 1). Figure 1 shows the percent of confirmed cases ending in death, by age, for five countries near the beginning of June. In every country, the percent dying increases sharply after age 50, and the highest rates occur among the oldest persons. The age pattern is clear across the countries even though the mortality levels are quite different; the United States has had a much greater number of cases and deaths than the other countries in this figure, but the mortality level was higher in Italy. This difference in levels could be influenced by the proportion of diagnosed cases, which depends on testing, treatment of cases, and whether COVID-19 deaths include only those confirmed with a diagnostic test or both confirmed and probable deaths (Sung & Kaplan, 2020). Even with these differences, the pattern of an exponential increase in death with age is clear.
There are several likely biological explanations for why COVID-19 is more deadly for older persons, and why older people have much higher death rates no matter what the level of overall infection is. The first is that older persons may be more vulnerable to getting the disease when exposed to the virus because of changes in the immune system with age. Immunosenescence increases markedly with age; as COVID-19 is a novel virus, not encountered before, having more available immune cells to fight it is important (Nikolich-Žugich, 2018). As age increases, the availability of naive T cells and the ratio of CD4/CD8 T cells to address any new pathogen become depleted, and this depletion has been linked to poor responses to COVID-19 (Aviv, 2020). The prevalence of a low CD4/CD8 ratio, often used as an indicator of immunosenescence, increases markedly with age in a large, representative sample of the American population; it is 3 times higher among those older than 80 than among those in their 50s and 60s (Figure 2A).
In addition to having less ability to fight off a novel virus, other aspects of immune functioning also may be worse for older people. Hyperinflammation has been linked to poor outcomes with COVID-19 due to "cytokine storms," an out-of-control immune reaction that can overwhelm systems and lead to death (D'Elia et al., 2013; Liu et al., 2016; Wang et al., 2020). Hyperinflammation has been characterized by increases in levels of specific cytokines (Blanco-Melo et al., 2020; Vabret et al., 2020; Zhang et al., 2020). Levels of dysregulation in five cytokines by age among Americans over 50 are shown in Figure 2B; the average number of dysregulated cytokines doubles from ages in the 50s to the 80s. A third factor making COVID-19 more serious for older persons is that they are more likely to have underlying conditions, such as heart disease, hypertension, diabetes, and lung disease, and COVID-19 mortality is higher for those with underlying conditions (Centers for Disease Control and Prevention Coronavirus Disease 2019 Response Team, 2020). These conditions may be linked to COVID-19 because they are associated with greater expression of angiotensin-converting enzyme 2, the protein by which COVID-19 viruses bind to cells. In addition, these conditions are linked to higher levels of inflammation. In New York City, 88% of deaths among confirmed COVID-19 cases occurred among people who had an underlying condition; this percentage did not vary much by age, as it was 82% even for those under 17 years of age (New York City Department of Health, 2020). Underlying conditions become more common with age in the older population. Figure 3 shows that in every age group over 50, more than half of Americans have heart disease, lung disease, diabetes, stroke, or hypertension; by age 70, four out of five people have at least one of these five conditions. These biological changes linked to aging and morbidity are one of the reasons deaths have been concentrated among older persons around the world.
Biological factors may strongly affect how people respond to infection with COVID-19, but social rather than biological factors primarily determine the likelihood that people of different ages get infected with COVID-19, get diagnosed with the disease, and get treated in a timely fashion. In addition, the level of infection in a country or geographic area appears to be highly related to policies and behavioral responses in the population, as well as macro social and economic circumstances. The age structure of who gets infected also depends on differences in social contact and living arrangements by age. Within the United States, one of the groups most affected by COVID-19 has been the Navajo Nation (Navajo Nation Department of Health, 2020). Cases and case fatality rates are especially high among older Navajos, who tend to live with many members of their extended families, many of whom may be regularly exposed to people outside the home because they need to work, but who live in dwellings with small numbers of rooms. In Italy, the fact that older persons were well integrated into families both socially and residentially was thought to expose them to more disease and even higher death rates than in other countries (Dowd et al., 2020). The amount of contact with infected persons is a factor promoting infection, so the less the contact, the less likely an infection will result.
Older members of the Navajo Nation also are much more likely than people in the rest of the country to be of low economic status and to live in dwellings without running water, which makes it difficult to wash hands frequently, one of the most important anticontagion activities. Low socioeconomic status can also be associated with having more difficulty social distancing, because of living arrangements and being in contact with more people due to household members having essential jobs.
The most egregious example of the selective impact on older adults has occurred within nursing and other residential care facilities. Residents of these settings are most at risk of getting COVID-19 and dying from it, in the United States and in a number of other countries (Fallon et al., 2020). A relatively reliable estimate in early June was that 42% of all COVID-19 deaths in the United States had occurred in nursing facilities and other long-term care residences (Girvan & Roy, 2020). These homes have a deadly combination: older residents who are in poor health, in close contact with both staff and other residents, living in confined spaces. This, coupled with inadequate testing for COVID-19 among residents and staff and inadequate infection control, has caused this disaster. The proportion of all COVID-19 deaths in each state occurring among those in nursing facilities and other long-term care facilities is shown in Figure 4. The estimates are probably low, as some states do not report deaths that were not diagnosed with testing; some states have not fully disclosed facility deaths; and some states are not even included, as they do not report deaths by residence (Paulin, 2020). For 26 of the states shown in the figure, at least 50% of all COVID-19 deaths in the state occurred among residents of long-term care facilities. In addition, it is suspected that untested formal caregivers and other relatively young asymptomatic workers moved from one facility to another, spreading disease (Furuse et al., 2020). The overall lack of testing and infection control, and the high levels of viral exposure from extremely close contact among staff and residents over extended periods of time, contributed to such a rapid spread of cases.
At the time this was written, the United States was undergoing a renewed surge in COVID-19 cases in the South and the West, while control of the pandemic was being gained in the Eastern states with initially high levels of cases (e.g., New York, New Jersey, Massachusetts). It remains unclear how long this initial wave of the pandemic will last, when a second wave will come again, or whether a vaccine will be available in the coming year. Meanwhile, research to better understand why the United States has become a world leader in the number of COVID-19 cases and mortality has only just begun (U.S. Government Accountability Office, 2020).
The delayed ability to test for COVID-19 at the beginning is widely accepted as an initial challenge that got the United States on the wrong path. The Centers for Disease Control and Prevention was not ready with an accurate and available test, resulting in very few people being tested in the initial months; this contributed to cases spreading rapidly across the country. In comparison, countries with a greater capacity to test in the early phases of the pandemic were able to contact trace and contain the spread of the disease more effectively (e.g., Singapore, Japan, Taiwan, Hong Kong, and Canada). Other countries also used fairly severe "lock downs" and issued quarantines to effectively limit the spread (e.g., China, New Zealand; Wilasang et al., 2020) .
Another early problem in the United States was the lack of personal protective equipment (PPE) for medical workers (U.S. Government Accountability Office, 2020). The United States did not identify sources of supplies early, nor did the federal government oversee the acquisition and distribution of supplies across the country so as to target distribution to those states and localities in greatest need. PPE remains in short supply in many parts of the United States even now, and this has been especially problematic for nursing facilities.
In an initial assessment of what went wrong in U.S. nursing facilities, Senators Casey, Peters, and Wyden (2020) reported that it took nearly four months after the first outbreak in a Washington State nursing facility before data collection on facility-specific deaths was federally mandated. Initial efforts involved "locking down" nursing facilities, but without testing or quarantining of residents and staff, these measures did little to ameliorate the situation. These homes remain short of testing capability and PPE, and what has occurred in other long-term care settings (e.g., assisted living) is still unclear. Without accurate and timely data, a targeted and effective public response will continue to lag, and efforts to mitigate COVID-19 outbreaks may continue to be unnecessarily inconsistent from one facility to the next.
Perhaps most notable was how, in contrast to other countries, the current administration decided to substantially reduce federal support for the administration and financing of the nation's public health infrastructure and withdraw from international partnerships designed to identify and coordinate responses to such worldwide public health emergencies. Had the current administration maintained (or even expanded and improved) the nation's public health infrastructure and international partnerships, the development of international supply chains and the efficient distribution of tests and PPE arguably would have been improved.
Interestingly, several of the other "more successful" countries that experienced the severe acute respiratory syndrome epidemic in 2003 responded by increasing national investments in public health infrastructure, and were better prepared for COVID-19, as coordinated plans and response approaches to this infectious pandemic were already in place (Sung & Kaplan, 2020) . Whether federal and state policy-makers learn from this current experience and begin planning and acquiring resources to prepare for the next pandemic or other public health emergency remains to be seen.
The U.S. Government Accountability Office (2020) indicated that one of the lessons for our government to learn concerns the benefit of "establishing clear goals and roles and responsibilities for the wide range of federal agencies and other key players," as any pandemic is, by definition, a national emergency. The Government Accountability Office also emphasized the need for public officials to follow scientifically based approaches to alleviating the disease and to "provide clear, consistent communication among all levels of government, with health care providers, and the public," so that recommended public health practices are less subject to varied interpretation of rules, regulations, and recommended practices across states and localities, and to mitigate the politicization and corresponding dissemination of misinformation across news and social media (U.S. Government Accountability Office, 2020, "Providing" page 3, para. 19).
Finally, when considering older Americans in particular, a post-COVID-19 presidency should embrace how changes linked to the biological aging and social lives of older Americans placed them at higher risk for morbidity. This understanding may then lead to assigning greater value to investing in basic scientific studies. On one hand, there is a need to better understand the long-term effects of COVID-19 on the health trajectories of those who were infected: do older persons who had COVID-19 experience lasting organ damage from the infection, or have their immune systems become compromised in ways that place them at even greater risk when the next viral outbreak occurs? On the other hand, the federal government should invest in basic and applied research to better identify, protect, and treat those older Americans most at risk from future infectious outbreaks and other public health emergencies.
Support for this work was provided by the National Institute on Aging (P30 AG017265).
As policy responses to the challenge of anthropogenic climate change continue to intensify across the world, a new geography of low-carbon energy infrastructures is rapidly emerging [1-3]. While these new infrastructures operate at multiple sites and across many scales, reflecting a broad diversity of low-carbon technologies, local actors, such as community and civic energy organisations, are embedding new energy technologies within their communities in an era of growing energy decentralisation and distributed energy generation [4-6]. This process of energy decentralisation has been gaining traction across the UK for over a decade, with both community energy [7] and civic energy sector [8] actors increasingly engaging in energy markets still dominated by players that have formed an oligopoly over the UK energy market [9]. One UK energy minister famously stated that the government was keen to see a shift from the 'Big six to the Big 60,000' [10], noting a trend towards a diversification of energy market competition and the proliferation of a new set of local energy market actors [11,12]. However, energy decentralisation has grown in complexity and in scope, featuring a variety of cross-sectoral actors collaborating across multiple levels of governance [7,8]. Rather than being a core focus of UK energy policy, decentralised energy remains somewhat marginalised, lacking a comprehensive regulatory framework and strategic direction. In addition, the UK policy framework for decentralised energy has begun to shift away from community energy towards 'local energy' more broadly, in which partnerships between public and private actors, alongside an emphasis on the role of local authorities, play a key role in energy decentralisation initiatives and processes [13,14].
Such developments in decentralised energy markets and the rapid growth of renewable energy technologies have been spurred in part by legislative developments, such as the EU Renewables Directive 2009, the Energy Act 2008 and the UK Climate Change Act 2008, that have ensured an increase in renewable energy generation capacity through subsidy schemes supporting renewables and decarbonisation targets aiming at significantly lowering the UK's carbon footprint by 2050 [12] . The introduction of the UK's Feed-In tariff (FIT) scheme in 2010, in particular, has supported the growth of community renewables projects focused largely on solar [15] and wind energy [16] , as it provides guaranteed 20-year payments for the generation of renewable electricity. In addition, the cost of renewable energy technologies, such as solar PV and wind turbines, has been significantly reduced in recent years, with commercial-scale renewable energy power generation set to be cost-competitive by 2020-2022 [17] . This is due, in part, to their rapid uptake within global energy markets over the past decade, the creation of new economies of scale and global systems of production supporting widespread market penetration and increased global investment [17, 18] .
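To make the FIT arithmetic concrete, the short sketch below estimates the annual revenue a community-scale solar array might earn under a generation-plus-export tariff structure of the kind the FIT scheme used. All parameter values (array size, capacity factor, tariff rates, export share) are illustrative assumptions rather than figures drawn from the scheme or from this paper.

# Illustrative sketch of Feed-In Tariff (FIT) revenue for a community solar array.
# All parameter values are assumptions for illustration, not actual FIT rates.

HOURS_PER_YEAR = 8760

def annual_generation_kwh(capacity_kw: float, capacity_factor: float) -> float:
    """Estimate annual electricity generation from installed capacity."""
    return capacity_kw * capacity_factor * HOURS_PER_YEAR

def annual_fit_revenue(generation_kwh: float, generation_tariff: float,
                       export_tariff: float, export_share: float) -> float:
    """FIT pays a generation tariff on every kWh generated, plus an export
    tariff on the share of generation exported to the grid."""
    return generation_kwh * (generation_tariff + export_tariff * export_share)

gen = annual_generation_kwh(capacity_kw=1800, capacity_factor=0.11)   # assumed values
revenue = annual_fit_revenue(gen, generation_tariff=0.05,             # assumed GBP/kWh
                             export_tariff=0.05, export_share=0.5)    # assumed
print(f"Estimated generation: {gen:,.0f} kWh/year")
print(f"Estimated FIT revenue: GBP {revenue:,.0f}/year over the 20-year term")

Under these assumptions the array generates roughly 1.7 GWh and earns around £130,000 a year; it is the guaranteed 20-year term of such payments that made these projects attractive vehicles for community share offers.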
While a new host of actors, institutions and market players stands to benefit from this transition, we are simultaneously living in an era of unprecedented social and economic inequality [19]. Thus, when exploring the various ways in which climate change and social inequality intersect, it is vital to contrast the new wave of innovation, economic prosperity and growth brought about by the low-carbon transition with the worsening of social inequality in contemporary society. Such a contrast is particularly pertinent to the UK.
At the same time as renewable energy generation reaches record highs [18], with low-carbon energy generation accounting for the majority of electricity production in the UK [20], austerity measures have reconfigured the social landscape of the UK. Austerity - seen here as a macroeconomic shift involving widespread fiscal cutbacks of state spending that has been definitive of both national and local-level UK politics since 2010 - has been widely criticised for its divisive impact on society and its negative impact on social equality generally [21,22]. Recent research shows that social inequality has worsened in the UK as central government has pursued an austerity agenda that drastically cuts back vital public expenditure on services such as education, health and welfare provision. Over the last decade, this austerity agenda has been shown to be devastating for low-income families, communities and regions across the UK [23]. In a context where the UK has one of the highest levels of inequality amongst the developed OECD countries, policy research points towards evidence of huge wealth disparities: the richest 1% of the UK population are wealthier than the poorest 50% combined [24], 13 million people have been classed as living below the UK poverty line [25], and a 'cost of living crisis' has meant that the poor have become worse off in recent years, even as 'income inequality has fallen back to levels last seen one or two decades ago' [26 p.12]. Hood and Waters [27] project that income inequality will rise over the period 2017-2021, while UK poverty rates will remain roughly unchanged. However, the onset of a new economic crisis brought about by the COVID-19 pandemic is drastically reshaping forecasts of social inequality, whilst also exposing a multitude of existing inequalities in the UK by disproportionately affecting the worst-off [28].
The intersection of these critical areas presents new terrain for researchers working at the interface of low-carbon energy transitions and social science, as questions around the spatial distribution of new energy infrastructures and their embedding within landscapes of social inequality and material deprivation [29] present new challenges to those seeking to prevent energy transition processes from worsening social inequality. Existing research that examines the interplay between new local energy systems and deprived communities 1 demonstrates the relatively exclusive character of localised energy schemes: more affluent communities with the necessary time, resources and capacity are typically the most likely to engage in, benefit from and develop their own local low-carbon energy schemes [30-32].
However, emerging research also examines the opportunities and benefits for low-income and deprived communities arising from low-carbon transition processes [33], such as the material and wellbeing benefits of home energy efficiency schemes [34,35], advice provision to low-income areas on energy usage and bill reduction [36], and revenue generation from subsidy-backed renewables deployment [37], which feeds into the development of 'community benefit funds' that support local organisations and local economies [38,39,16]. In highly unequal societies, the spatially uneven distribution of such low-carbon transition processes, and the differing abilities of disparate and divergent communities to benefit from them, reflects a form of social inequality previously unseen and little explored. Thus, this paper offers novel insights into new forms of social and spatial inequality - and opportunities to rectify and address those inequalities - that arise from local low-carbon energy transitions.
In exploring this little-researched terrain, the aim of this paper is to draw upon energy justice theory to investigate the broad social impacts of low-carbon transitions in the city of Bristol, at a time when public funding cuts and fiscal austerity measures are exacerbating social inequality. In seeking to contribute this original insight, the paper takes a distinctly spatial approach to understanding the interplay between increased local energy generation via energy decentralisation and social inequalities. It is therefore important to briefly address what previous and current literatures have offered in critically analysing local energy schemes.
Catney et al. [30], in a powerful study of two different communities' engagement in energy schemes in the West Midlands in England, critically explored the social impact of community-led energy schemes. Recognising that the highly unequal capacities of differing communities to engage in local energy schemes would be critical to their initiation, the authors suggest undertaking what they call a 'reality check' when exploring the potential for a deprived locality or community to engage in low-carbon energy generation projects [30]. They put forward three points of analysis: taking stock of a community's social capital, assessing community capacity, and understanding its cultural capacity, noting that 'for people living in poverty in deprived areas, personal responsibility for carbon emissions may be the furthest thing from their minds' [30 p.11]. Their comparison of two relatively affluent wards with two highly deprived wards in the West Midlands sheds light on the critical importance of geographical differences and disparities underpinning socio-economic inequalities. Indeed, Bridge et al. [1] see energy transitions themselves as a fundamentally geographical process, arguing that the energy transition 'pathways' governments choose will shape the future geography of transitions. In their analysis of space and place in low-carbon transitions, they see the uneven and unequal landscape for the deployment of a diverse array of low-carbon technologies as embedded in 'spatial difference', emphasising that:
There are significant opportunities [...] for understanding the relationship between different trajectories of energy transition and the geographical conditions from which they emerge [1 p. 339]

This spatial difference is intimately connected not only to the physical geography of resources, but also to the capacity of different regions, local authorities and communities to engage in energy transition processes. Interestingly, more recent studies have taken this explicitly geographical focus and integrated spatial justice [40] approaches into energy justice. Scholars have found that local and 'area-based' policy solutions have the potential to remedy geographically uneven patterns of energy injustice [29 p. 646], and this geographical variation has been shown to be present both in the spatially uneven deployment of renewable energy technology [41] and in the uneven distribution of the benefits of decarbonisation between households and localities [42]. Bringing these literatures together, Bouzarovski and Simcock's [29] call for scholars to acknowledge 'landscapes of material deprivation' when considering processes of energy injustice within energy systems demonstrates strong connections to Catney et al.'s [30] analysis of the inability of deprived areas to engage in potentially beneficial forms of local energy activity. This highlights the socio-economic implications of Bridge et al.'s [1] 'spatial difference' and connects to De Laurentis and Pearson's [41] examination of spatial unevenness and unequal local capacities in low-carbon energy transitions more specifically. Additionally, While and Eadson's [42] analysis of socio-spatial disadvantages arising from decarbonisation processes evidences the disproportionate effect of rising energy prices and job losses on low-income areas, whilst also shedding light on the relative exclusion of low-income households from the benefits of the FIT scheme [42 p. 1635].
Building on these relations between space and energy injustice, Golubchikov and O'Sullivan [43] and O'Sullivan et al. [44] have introduced the concept of 'energy peripheries' that are integral to geographically uneven energy transition processes. Connecting to notions of energy vulnerability and drawing on an analysis of unequal low-carbon transitions in Wales, both papers emphasise the extent to which transition processes reproduce distinctly spatial injustices and reinforce pre-existing spatial hierarchies. In addition, Yenneti et al. [45] demonstrate how the implementation of a large solar park in India led to spatial injustices for vulnerable communities dependent on the land where the solar park was installed, highlighting a process of unjust land acquisition for renewable energy deployment as part of the low-carbon energy transition.
It is apparent from these literatures, therefore, that space and place are integral features of unjust low-carbon energy transitions. There are clearly strong overlaps and powerful interconnections at play: the geographies of social inequalities and the physical siting of new energy generation infrastructures are deeply interlinked. This also connects to previous insights derived from an explicitly geographical focus in environmental justice research, whereby 'first generation' understandings of environmental injustice relate to the close proximity of disadvantaged communities to the geographical site of the injustice [46]. This overlap and interplay will only become more important for understanding energy policy responses to climate change, as energy decentralisation continues to grow both as a technological response to climate change mitigation and as an important component of the global energy transition.
In order to make theoretical sense of this complex interaction between differing levels of community capacity and its relationship to spatial difference, an energy justice framework is drawn upon to help illuminate the core social aspects of the case study featured in the following sections. Moreover, the first generation understanding of 'proximities' outlined above [46] is extended here to the energy justice field, enriching energy justice perspectives on low-carbon transitions [47]. Four principles of energy justice are used to assist with this analysis: (1) procedural justice - relating to the participation of people in energy-related decision-making processes; (2) distributional justice - concerning the sharing and distribution of energy system benefits and burdens; (3) recognition justice - seeking to ensure the acknowledgement of marginalised and/or disadvantaged groups in relation to energy systems; and (4) restorative justice - a process of remediation in response to a perceived energy injustice [48,49]. In the analysis of the findings, these four tenets of energy justice act as thematic guides, providing a useful framework through which to sort, categorise and analyse the qualitative data collected. In addition, notions of 'space' and 'place' - and indeed ideas of 'spatial justice' - permeate throughout the research findings.
The next section outlines the research methods used to engage with this vital theoretical overlap, using a Bristol case study as a reference point.
While this paper draws upon an energy justice framework to shed light on the interconnections between the geographies of social inequalities and the physical siting of new energy generation infrastructures, the empirical data used to support this insight derives from two of the traditional techniques of participatory action research (PAR): in-depth interviews (n = 10) and participant observation. This data was collected over an 18-month period from mid-2015 to early 2017, forming part of the data collection process of a PhD thesis [39]. Participant observation was used firstly as a means to gain familiarity with civic energy communities and networks in Bristol and secondly, to record key discussions and occurrences at events in and around Bristol. After attending many local events and connecting with various civic and community energy actors in Bristol throughout 2015 and 2016, appropriate participants were approached in order to gain familiarity with all actors involved in the Lawrence Weston area of Bristol, after it became clear that two energy projects were present within the area. It is also vital to note that Lawrence Weston is one of the most deprived parts of Bristol and is among the most deprived 10% of areas in England [50], 2 acting as a critical backdrop for advancing understandings of the empirical links between local low-carbon transition initiatives and deprived communities.
The case study mostly revolves around primary data collected during in-depth interviews with five key organisations present within Bristol's civic energy network, focusing exclusively on their involvement with the Lawrence Weston community. These are: Ambition Lawrence Weston 3 (ALW; n = 3), Low Carbon Gordano (LCG; n = 2), Bristol Energy Co-operative (BEC; n = 2), Bristol City Council (BCC; n = 1) and Bristol Energy Network (BEN; n = 2). In addition, two solar PV projects associated with this case study - LCG's Moorhouse Solar Array (MSA) and BEC's Lawrence Weston Community Solar Farm (LWCS) - are critical to the contestations at its heart. A total of 10 interviews feature in this case study, with one research participant featuring in two organisations. Table 1 presents the identifier system for these organisations and actors, with unique identifiers assigned to each organisation and the associated participants interviewed, and anonymity ensured for all individuals involved.
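Since Table 1 itself is not reproduced here, the snippet below is a hypothetical reconstruction of the identifier scheme described in the text: each organisation receives a short code, and interviewed participants are numbered within it (e.g. ALW1-ALW3). Organisation names and interview counts are taken from the passage above; the enumeration helper is purely illustrative.

# Hypothetical reconstruction of the Table 1 identifier scheme described above.
# Organisation codes and interview counts follow the text; identifiers such as
# "ALW1" match the paper's usage in the findings.

ORGANISATIONS = {
    "ALW": ("Ambition Lawrence Weston", 3),
    "LCG": ("Low Carbon Gordano", 2),
    "BEC": ("Bristol Energy Co-operative", 2),
    "BCC": ("Bristol City Council", 1),
    "BEN": ("Bristol Energy Network", 2),
}

def participant_ids(org_code: str) -> list[str]:
    """Enumerate anonymised participant identifiers for an organisation."""
    _, n_interviews = ORGANISATIONS[org_code]
    return [f"{org_code}{i}" for i in range(1, n_interviews + 1)]

for code in ORGANISATIONS:
    print(code, participant_ids(code))  # e.g. ALW ['ALW1', 'ALW2', 'ALW3']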
The data thus draws on research with and on civic energy actors in Lawrence Weston; however, the level of participation in the organisations themselves varied throughout the period of data collection. After integration into Bristol's energy communities, subsequent attendance at the ALW Planning Group meetings established deeper connections with Lawrence Weston residents, through forging links with key members of ALW. Despite this 'functional' level of participation in ALW's activities [51], the in-depth semi-structured interviews, which lasted between 50 and 90 minutes, were conducted in a more classically 'extractive' fashion, in which researchers seek to obtain knowledge and insight from key actors through a set of flexible, pre-determined questions. However, room was given for research participants to contribute their own understandings of local energy justice. While the participant observation technique was used to build connections to local energy communities, the in-depth interviews facilitated a much deeper conversation with research participants, exploring the complexity of individual experiences and relationships between different organisations, alongside how these experiences and relationships relate to contested local energy transitions and issues of local geography. In the in-depth interviews, participants expressed a shared interest in energy justice and saw the applicability of the theory in practice, offering their own interpretations of energy justice and describing how tenets such as distributional and procedural justice apply to their own experiences, activities and respective organisations. This collaborative approach contributes local and bottom-up perspectives on energy justice and issues of spatial (in)justice in relation to community energy, one of the core contributions of the paper. In addition, the analysis offered in this paper was presented to members of ALW (ALW1-3) and Low Carbon Gordano (LCG1) for feedback in April-June 2020 in the interests of transparency and accuracy, as well as to receive updates on the progress of ALW and LCG. After this follow-up communication had taken place via email, the participants confirmed the accuracy of the paper; this continued communication also contributes to the participatory and open spirit of a PAR approach in academic research. Given the 'local' scale of the paper's fieldwork, the use of PAR is intended to advance the energy justice and low-carbon transitions research agenda [52] within Bristol. Theoretically, the paper also harnesses energy justice's analytical power to provide insights into the role of geography and its critical interconnections to social inequalities in local energy transitions. Lastly, it is important to acknowledge the limitations of the data: ALW1 and ALW2 are both lifelong residents of Lawrence Weston and active members of ALW. In the presentation of the data below, ALW1 and ALW2 are referred to as residents of Lawrence Weston and members of ALW simultaneously, as they occupy dual roles. Finally, ALW3 worked for both BEN and ALW in two separate roles, representing both organisations in the data presented.

2 For consistency in the definition of deprived communities, note that the Indices of Deprivation 2019 'have been produced using the same approach, structure and methodology used to create the previous Indices of Deprivation 2015' [64 p. 7].

3 Ambition Lawrence Weston is a local regeneration charity, set up in 2012, that seeks to improve the lives of residents in the local area after a decline in local services. More information can be found at: https://www.ambitionlw.org/.
The origins of the case study stem from a heated debate between a resident of Lawrence Weston and a Director of LCG. This was observed during the participant observation phase at an event in early 2016 held by BEN in central Bristol. During this event, one important recorded note summarised the nature of this dispute:
This recorded dispute formed the foundation of this case study, proving critical to many of the topics discussed in the follow-up in-depth interviews with actors from the five organisations outlined above. Further activities that arose on the basis of this dispute also demonstrate strong resonance with different aspects of the three core tenets of energy justice, alongside the more recently proposed tenet of 'restorative justice' [49], expanded upon further in Section 4.1.4.
During the data collection period (2015-2017), the city council approved and supported the installation of two different solar arrays on council-owned land in and around Lawrence Weston: the 'Moorhouse solar array' (MSA) and the 'Lawrence Weston Community Solar Farm' (LWCS). The MSA, organised by LCG, consists of 7,200 solar panels that produce enough electricity annually for around 500 homes. With a £500 minimum share, just over £2 million was raised through a community share offer developed by LCG in 2014, and the project has been fully operational since April 2015. The project received technical support from local renewables company Solarsense (http://www.solarsense-uk.com/), based on the outskirts of Bristol, and was praised by the then-incumbent Mayor George Ferguson, who attended the launch of the new installation, pictured in Fig. 1. Whilst the MSA was widely supported as a key part of Bristol's low-carbon future, the project exhibited very little initial involvement with Lawrence Weston, despite its close proximity to the community, as detailed in Fig. 2.
In contrast, the LWCS farm, organised by BEC, is based on Lawrence Weston Road and sits more decidedly 'within' the community's territory. The LWCS farm has 4.2 MW of solar generation capacity, enough to power around 1,000 homes annually - close to double the capacity of the MSA (see the sketch below for a rough check on these figures). The project has been fully operational since June 2016 and, alongside a solar farm in Puriton, is one of BEC's two key solar projects, which together raised over £9 million through public share offers, with the opportunity to purchase a £50 minimum share as part of this fundraising scheme. The LWCS farm received support from ALW, as seen in Fig. 3.
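As a rough check on the 'homes powered' figures quoted for the two arrays, the sketch below converts installed capacity into an estimated annual output and an equivalent number of households. The capacity factor and average household consumption are assumed values, not figures taken from the paper, so the result is indicative only.

# Rough conversion from installed solar capacity to an equivalent "homes
# powered" figure. Capacity factor and household consumption are assumed values.

HOURS_PER_YEAR = 8760

def homes_powered(capacity_mw: float,
                  capacity_factor: float = 0.11,          # assumed UK solar average
                  household_kwh_per_year: float = 3800.0  # assumed household demand
                  ) -> float:
    annual_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    return annual_mwh * 1000.0 / household_kwh_per_year

# LWCS: 4.2 MW of capacity, reported as powering around 1,000 homes.
print(f"LWCS estimate: {homes_powered(4.2):,.0f} homes")  # ~1,065 under these assumptions

Under these assumptions, 4.2 MW corresponds to roughly 1,050-1,100 homes, consistent with the reported figure of around 1,000.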
The LWCS farm was developed, in part, in reaction to claims of injustice by ALW. As will be shown, BEC partially reacted to the claims made by ALW against LCG and against the city council's granting of planning permission to LCG within such a short timeframe, a timeframe driven by LCG's desire to remain eligible for the FIT rates of the time so as to ensure that its business model worked and that the rate of return to investors was guaranteed. Indeed, BEC sought to create a more just approach to local energy deployment and to enhance engagement through an 'active participant' model, while - as will be explained further towards the end of the research findings - LCG have commendably attempted to involve themselves more closely with ALW in response to these claims of energy injustice arising from LCG's development of the MSA.
Underneath these shifting relationships and processes of remediation lie contestations over spatial injustice in the form of low-carbon siting dynamics [3]. These siting dynamics fundamentally underpin the claims of injustice made against LCG by ALW. As this paper will demonstrate, these justice claims are intimately tied to the proximity of projects to the community of Lawrence Weston; a community that has felt as though it is on the geographical, economic and social margins of the city. Moreover, Lawrence Weston is shown to be significantly affected by austerity through the powerful statements of the Development Manager at ALW in Section 4. As evidenced in the 'Lawrence Weston Community Plan' (2018-2023) [53], the impact of austerity on the local area is a key consideration going forward:
In the next five years we will face changes and challenges in Lawrence Weston. The government's ongoing austerity programme will undoubtedly result in more cuts to local authority spending. This means that Bristol City Council will provide fewer services and support for residents. It will also mean less funding and grants for organisations in the voluntary and community sector [53 p. 6]

Thus, the politics of austerity in the local area features as a core concern for ALW. Drawing on secondary sources to generate further insight into the development of current and future relations between ALW and LCG, the paper will demonstrate that it is the overtly geographical aspect of spatial proximities that informs the energy justice disputes at the heart of these transition processes. In addition, wider concerns about the economic impact of austerity on the local area, as demonstrated above, bring into focus the emergence of new low-carbon activity in deprived parts of Bristol, alongside the prospects of economic opportunity that this brings to new spaces and places in a time of austerity and economic crisis.
The following subsections address issues of both energy injustice and justice, detailing the extent to which participants from ALW felt that processes of non-recognition, alongside a lack of inclusion within consultation measures around the installation of the MSA, led to claims of both recognition and procedural injustice. After using the recognition and procedural justice tenets to explore these tensions, the subsections turn to instances of energy justice through the lens of distributional and restorative justice, drawing on the impacts of the LWCS farm on Lawrence Weston in a time of austerity, as well as changing relations between ALW and LCG. The insights that follow reveal the extent to which energy justice can help scholars critically understand the complex politics of emerging local low-carbon transitions in areas of high deprivation.
As mentioned in the sections above, Lawrence Weston has long been recognised as an area of high deprivation and furthermore, is representative of some of the stark social, economic and geographical inequalities in Bristol. As a result, the Lawrence Weston community has felt a sense of recognition injustice for some time, with persisting issues including crime, poverty, low quality housing, poor transport links into the city and high levels of unemployment. 4 This sense of injustice connects quite intimately to issues of environmental injustice throughout Lawrence Weston's history. As Fig. 2 showed, Lawrence Weston is located close to Avonmouth, an area that was historically a host to various industries in the mid to late 20th century, producing vast amounts of pollution that impacted upon surrounding areas. One local resident recalls how this industry and associated pollutants were once the norm amongst the local community:
It's all dirty industry then so you had smelt works, you got Britannia Zinc, chemical plants and it was just accepted. Back in the day that was it, you had these big funnels and they're bellowing out dirt, dust and other pollutants […] there wasn't a lot of concern given at that time because that was how people were working and getting a living. Historically, air pollution and environmental injustice has been quite bad, really (ALW1)
As these industries began to decline or move operations elsewhere, one resident noted a significant improvement in the air quality of the local area:

While the presence of local industry clearly brought economic benefits to the residents and families of local areas, this history of environmental injustice connects strongly to an ongoing scepticism within the community, rooted in the lived experience of local residents and a history of perceived environmental injustices. One participant noted that other parts of Bristol have not inherited this sense of continuing injustice against the community, and also possess a greater capacity to object to imposing and potentially damaging infrastructures:
There is a level of upset as soon as there is a mention of a power from waste burning plant. Or any other waste. There's a level of sensitivity. There is also a sense of disempowerment, whereas in other parts of Bristol […] immediately -there's an electric response amongst the community 'we're going to oppose this!' Here there's a much more ready -a belief that nothing can be done -that 'they're doing it again! (ALW2)
Fundamental to the recognition justice tenet is the acknowledgement of marginalised and deprived communities in energy systems and transitions, which also applies to the distribution of both environmental ills and environmental 'goods' [46, 54] . While renewable energy infrastructures, such as wind and solar installations, are often described as environmental 'goods' due to their contribution to CO2 emissions reductions, their imposition on local communities and landscapes, without some form of consultation and approval, potentially generates new forms of injustice [55] .
Interestingly, ALW's Energy Project Officer, who, as stated above, is also a member of BEN, was conscious of this history of non-recognition in Lawrence Weston, stating that: 'not only have they not benefited, they've also been recipients of poor air quality, noise, and numerous amounts of health impacts without that being recognised and well supported' (ALW3). ALW's Energy Project Officer therefore sought to use community renewables as a means to counter this history of the community existing at the margins of Bristol, alongside bringing new economic opportunities to the local area. The residents/members of ALW therefore felt that both LCG and BCC had ignored the community when seeking to deploy the MSA, threatening to repeat mistakes of the past in which the interests and voices of the local community were consistently ignored:
A planning application was brought forward and was well advanced for putting the solar farm in. Without any consultation with us -a neighbouring community -let alone as a neighbourhood planning forum […] it was a significant development, it was right up against the boundary of the planning area and I felt they had simply ignored the community -the planning authority had completely ignored the community (ALW2)
In addition, another participant felt that other new low-carbon energy infrastructures, including the MSA, were deployed close to Lawrence Weston without recognition of the local community:
The solar farm at Moorhouse, the wind turbines, local authority solar farm and wind turbines, we didn't get to hear about any of that. Only when we realised that there is a benefit for us getting involved, then we remonstrated and got highly involved, really (ALW1)
While these residents of Lawrence Weston / members of ALW felt that local low-carbon energy transitions were failing to recognise a community within close proximity to new infrastructures, alongside seeing the potential benefit for greater involvement in transitions, a director of LCG felt that ALW's claims of injustice were unjustified:
In contrast, a director within BEC acknowledged that there was an issue of non-recognition in relation to new energy infrastructures around Lawrence Weston, and saw this as an opportunity to foster deeper engagement, financial support and new relations with the community via the LWCS farm:
We have a 10-year plan to rejuvenate the community. They're surrounded by energy - they're right in the shadow of the wind turbines - there's a whole load of energy plants down there in Avonmouth - on the whole they haven't benefitted from any of it really. They are just sitting right in the shadow of it […] we're working very hard to ensure that surplus profits are going directly to them (BEC1)
While these different approaches of LCG and BEC are clearly opposed to one another, the difference between them, like the claims of injustice made by ALW, can be linked to contrasting conceptions of the geographical boundaries - and indeed, the contested geographies - of energy infrastructure siting. For example, when questioned on some of the claims made by ALW around the siting of the MSA, a director from LCG responded by stating that:
However, this geographical separation, while recognised by ALW, was not sufficient to justify the non-recognition and exclusion of Lawrence Weston:
'There wasn't any recognition […] The solar farm at the moment and the wind turbines aren't really in our geographical area, or our border area, but it's so close to our border, I think we are affected by it' (ALW1)
'They should've been here and the community feel that they should've been included much more formally […] the fact that it is just there. I think there is a general principle there as well' (ALW2)
Interestingly, LCG themselves admitted that they could've done more initially, and that a sense of recognition injustice pervaded ALW's claims of injustice:
This data demonstrates the extent to which recognition justice is a vital tenet within the energy justice framework, providing grounds upon which both the directors of community energy schemes (LCG and BEC) and members of ALW are able to voice their concerns around energy injustices in low-carbon transition processes. Key to this sense of recognition injustice and non-recognition was the city council's and LCG's failure to consult the community and include them in any decision-making procedures surrounding the implementation of the MSA. This is explored further in the next subsection, which addresses procedural injustices in the development of the MSA.
Recognition justice can provide the foundation upon which both distributional and procedural justice can be realised. It would logically follow, then, that in cases of non-recognition, instances of procedural and distributional injustice can arise. In the case of procedural injustice in relation to the MSA, the speed and short timeframes within which LCG had to act, in order to beat impending FIT scheme reductions, were instrumental in their failure to include ALW in the initial stages of decision-making and consultation. LCG also had to rely on voluntary directors for outreach work, which significantly reduced their capacity to connect with communities in close proximity to the MSA. Furthermore, LCG sought suitable sites for solar PV outside of North Somerset (South West England) because the organisation was located in a hostile local authority area that proved highly sceptical of and unsupportive towards new low-carbon energy projects, as made clear by both directors in the interviews:
'I don't think they believe in communities here in North Somerset. The council here are dreadful. They've got in our way more than they've helped us […] we've really struggled to get any traction. We've had no support from them' (LCG1)
In addition, a director within BEC was sympathetic to the hostility LCG faced from their local council when speaking about the development of the MSA in Bristol:
This issue around timing and non-recognition was further reiterated by the Energy Project Officer at ALW:
Despite these admissions, a member of ALW / resident of Lawrence Weston was adamant that the local community were ignored and that Lawrence Weston were excluded from consultation measures:
This subsection shows that ALW's sense of procedural and recognition injustice was amplified by the lack of consultation with the local community and ultimately the non-recognition of Lawrence Weston by LCG when they installed the MSA. This sense of non-recognition also extended to the city council as well as LCG, who approved planning permission for the installation of the MSA in such a short timeframe. While the restorative justice subsection details the efforts made by LCG to enhance community relations with ALW, the above two subsections have detailed some of the energy injustices associated with new, emerging low-carbon energy infrastructures in Bristol.
The next subsection concerning distributional justice turns to a focus on BEC's LWCS farm, seen here as a partial reaction on behalf of both the council and BEC to some of the claims of both recognition and procedural injustice made in the above subsections. It also details the distributional impacts of the LWCS farm on ALW, exploring what this means in a time of austerity.
Following the tensions between ALW and LCG outlined above, it would appear that these occurrences had both a direct and indirect impact on future low-carbon energy initiatives in Lawrence Weston. Indeed, much of the interview data suggests that BEC and BCC acted, to some degree, to rectify these injustices. For example, when questioned on the relationship between BEC and the city council with regards to the development of new projects, an Investment Manager at the city council was keen to emphasise his support for the LWCS farm:
Weston road, right […] that one we're really, really focusing on […] that would be a very large chunk of community owned asset there, which would be very exciting for the city (BCC2)

Furthermore, while BEC had secured a partnership with ALW during the planning of the LWCS farm, they also sought to emphasise the bottom-up nature of ALW's involvement:
It is really the ordinary people, local people that are driving that. There's no doubt. If you go to a meeting you'd be in no doubt that that is the case (BEC2)
While the claims of injustice against LCG around the MSA influenced the council and ALW, it would be inaccurate to ignore the wider impacts of austerity on Lawrence Weston. It would also be wrong to assume that the city council's drive to secure a local energy project in Lawrence Weston and ALW's efforts to assist the regeneration of the local area through involvement with the LWCS farm were motivated by these claims alone. Rather, as attested to by ALW's Energy Project Officer, the origins of ALW itself lie in broader inequalities and injustices in the city:
This subsection on distributional justice therefore moves beyond criticism of the MSA and directly addresses the impact of austerity on the local community, while detailing the contribution of BEC to ALW and considering the ways in which an 'active participation' approach to local energy schemes can extend distributional gains to localities.
During the in-depth interviews, austerity emerged as a significant concern for key members of ALW who, when questioned on its material and financial implications for the local area, noted the severe impacts of fiscal cutbacks since their introduction in 2010:
I think the biggest impact it can have on us is service provision and lack of it, community cohesion, more vulnerable people being created […] especially in the housing market […] I think it will all impact on that. Crime, antisocial behaviour, drug use, more alcohol dependency simply because they are coping mechanisms to cope with all these cuts (ALW1)
Thus, ALW sought to prioritise new forms of economic activity that would benefit the local area and counter some of the harmful effects of austerity within the local community. This localisation of new economic activity was a key driver for ALW that closely aligned with BEC's desire to localise the economic benefits of low-carbon energy infrastructures. In addition, the city council were also supportive of this localisation agenda in Lawrence Weston:
I think what is important is that any benefits that flow from the project become locally sited. I think that one of the benefits of the way that the finance on Lawrence Weston road is structured, is that there are lenders involved who are obligating the project to pay out to Ambition Lawrence Weston because of their proximity to the project (BCC2)

(footnote continued) […] current council are now comprised of cross-party councillors that are much more supportive of renewable energy and LCG, with a director of LCG also elected to the council.

Therefore, the location of the LWCS farm more decidedly 'within' the community led to concrete distributional benefits for ALW, through direct payments to the organisation from BEC's surplus revenues, as detailed by ALW1:
We'll be getting £155,000. Payment schedule is £43,000 up front for the first year […] then £23,000 for the next four years. In addition to that - so, that's the upfront payment - a minimum of £8000 a year from the yield from the solar farm (ALW1)
This demonstrates the extent to which community energy projects can move beyond offering benefits to investors, to supporting local organisations that are contributing to the regeneration of their local economy, as reiterated by a director within BEC:
People don't have to be invested in it to get some of that benefit - it will be going to ALW who are doing projects for the whole community. That's the way we get our benefit out there (BEC1)
In addition, the creation of the LWCS farm and associated community benefits during a time of austerity proved highly valuable for ALW as an organisation going forward:
The beauty of this […] is that it's totally unrestricted. So, we can use it for whatever is needed to deliver our community development plan. Which to me is a godsend […] in these austere times, it's an absolute luxury to have available to us £155,000 plus £8000 a year that could be spent on core funding should we need to, but ultimately to have that money unrestricted to spend it on the needs of the local area is absolutely brilliant (ALW1)
This data reveals the extent to which community energy models can support local organisations and local economies, particularly in regeneration and development efforts. The findings above show that this is key to the distributional justice impacts of local energy infrastructures in deprived areas. However, while this level of community engagement and involvement certainly provides a stark contrast to LCG's non-recognition of Lawrence Weston, particularly in a time of austerity, it is not without its criticisms.
After a measure of distributional justice had been achieved by ensuring that ALW was supported financially as an organisation, rather than the economic benefits remaining the preserve of affluent investors, questions arose around what exactly this money would support and how. Discussion therefore followed around moving beyond a charitable 'passive recipient' approach to one ensuring the 'active participation' of local community residents and ALW's members, alongside general questions around the apportioning of surplus revenue. While BEC's support for ALW signalled a milestone for community energy directly supporting the regeneration of a deprived urban community, further examination of these links brings forth a certain politics around the allocation of surplus revenues. A participant from the city council noted that who decides on this allocation is crucial:
Where does that last tranche of funding go, once everything else has been paid? I can't comment on that because that is the job of that board of directors, and the shareholders. If they are all middle-class shareholders - well, you know, they would come up with a different answer than if they are all living in Lawrence Weston, which as we know, is quite a deprived area generally (BCC1)

Indeed, this concern proved well founded: from the perspective of ALW2, the agreement reached between ALW and BEC around surplus allocation was unsatisfactory:
50% of the surplus that is generated will come to Lawrence Weston and 50% will go to the BEC Community Energy Fund, there was never really any negotiation with the community about that […]
Furthermore, key to this idea that proximity requires a greater level of community engagement and involvement, ALW2 also sought to emphasise the need for deeper relations with the local community, moving beyond grants and awards to more active participation in the training, upskilling and empowerment of the local community:

Far from merely clarifying who benefits, this emphasis on an 'active participant' approach raises questions around how local communities benefit once local energy schemes facilitate engagement with the communities within close proximity to their associated infrastructures. This emphasis also connects strongly to social science perspectives on energy transitions that advocate a move from passive to active approaches to citizen involvement in energy transitions [56,57]. The next subsection explores this move 'beyond passive recipients' through the lens of restorative justice, addressing the changing relationship between ALW and LCG in response to claims of injustice and making clear that these developing relations have partially addressed issues of distributional injustice around the MSA.
As outlined in Section 2, the concept of restorative justice in energy justice, stemming from the work of Heffron and McCauley [49] , relates quite broadly to a process of remediation in response to a perceived energy injustice within an energy system or as part of an 'unjust' energy transition process. This process of remediation may take place through formal or informal action, or through appeal to legal processes and procedures to ensure that justice is achieved. Drawing on secondary sources in the form of information sourced from both LCG's and BEN's websites, it is clear that ALW, LCG, BEN and BEC have worked together to facilitate deeper forms of engagement between the Lawrence Weston community and local low-carbon energy projects. Thus, this restorative justice tenet is relevant to this case study in two senses.
Firstly, LCG have now incorporated ALW into their community benefit activities. Secondly, both BEC and LCG are aware of the need to move beyond the passive recipient approach to community energy that has become the norm within the sector, whereby few community energy organisations offer training opportunities in relation to the development of new low-carbon energy infrastructures. The development of an active participant approach therefore remains the exception rather than the rule, as attested to by a Co-Director within BEN when speaking about BEC:
I think the energy co-op - although arguably you can say that as a bunch of, sort of white middle aged, middle-class techie types - they are very conscious of what they are doing, and so working with organisations like Ambition Lawrence Weston, they are trying to create something that does deliver in a more inclusive way (BEN1)

LCG have recognised this and started to think of new ways to engage the Lawrence Weston community in some of their community benefit fund activities, with one director stating that 'finding a way of enabling the people who are, in this particular case local to the Moorhouse, to benefit, is actually something that we'd be very keen to do' (LCG1). This is demonstrated by LCG involving ALW more closely in their activities: 'we've had a meeting with Low Carbon Gordano, and I'm now on the panel as an Ambition Lawrence Weston representative and as the new energy officer' (ALW3). This desire for closer involvement is evidenced by the community benefit section of LCG's website, which builds on an active participant approach:
Ambition Lawrence Weston, representing the community close to our Moorhouse array, are going to train local, currently unemployed, people as energy advisers to help householders and businesses use energy more efficiently, and are working with local companies to ensure that there will be employment opportunities for the trainees after the project [58]

While LCG clearly sought to use some of their community benefit fund to assist local residents within Lawrence Weston, it is interesting to note the mention of the proximity of Lawrence Weston as a contributing factor in changing their relationship with, and acknowledgement of, ALW. This shifting relationship also connects to assistance from other actors within Bristol's local energy network, with BEN working with LCG to deliver on this active participant approach:
Gordano Community Benefit Fund. The grant was used to fund a community internship programme, with local long-term unemployed people working on various community energy projects in the Lawrence Weston area [59]

ALW3 responded to this need to advance deeper forms of engagement with the local community, and proved instrumental to the delivery of this internship scheme:
We basically recognised, talking to marginalised groups […] that people didn't access green volunteering in energy, because they basically felt they couldn't afford to do so. By creating an internship, which creates job opportunities, linking them to potential employers and giving them life skills, that would actually open a door to other opportunities (ALW3)
In addition to this, BEC wanted to encourage ALW to use their contributions to fund training activities within the local community, as noted by a member of BEN when discussing the passive recipient approach to local community engagement and support:

Furthering the creation of new economic opportunities in a time of austerity, this aspect of restorative justice connects powerfully to distributional justice and to a focused, targeted approach to delivering the benefits of the low-carbon economy to deprived areas.
This subsection has shown how, in response to claims of injustice around the proximity of the MSA to Lawrence Weston, and to passive recipient approaches that fail to fully engage and involve communities close to energy infrastructures, the tenet of restorative justice can be used to understand how LCG have sought to rectify past injustices and assist ALW in wider regeneration efforts.
This paper has demonstrated how the LCG Moorhouse solar development was initially viewed as something of an imposition that did not benefit the community at a time when services were being cut. This sense of imposition is particularly pronounced in a context of both austerity and high deprivation, which places greater emphasis on localising wealth generation rather than allowing profits to leak to external areas. This localisation of wealth generation, and the activities that flow from new revenue generation, are a particularly important distributional justice concern, while further correspondence with ALW demonstrated that low-carbon energy will feature as a vital part of the area's continued regeneration going forward. Interestingly, this is attested to throughout Lawrence Weston's community plan [53].
As established in follow-up communication in mid-2020, ALW were successfully granted planning permission by the city council to deploy a 150 m, 4.2 MW onshore wind turbine in the local area, which will be owned by ALW and will generate revenue for the local community. The project's revenues will also support the establishment of an 'energy learning zone', which will offer energy internships, events and workshops and focus on raising the skills of local residents [53 p. 54]. It is clear, therefore, that for both 'energy' and 'spatial' justice to be realised in local low-carbon transitions, deprived areas in which new energy infrastructures are deployed must be given the opportunity to benefit from those infrastructures, alongside opportunities for procedural engagement. These new infrastructures may then form a core part of future regeneration efforts and plans, bringing a further cycle of benefits and localised revenue generation.
Once deployed, community-owned low-carbon infrastructures are in place for decades. This article has shown that how the organisations behind them choose to relate to the communities and people that surround them will shape the energy justice impacts of low-carbon transitions going forward. Surprisingly, the majority of community energy literatures and policy reports have, to date, seen community energy schemes as largely beneficial to the localities in which they are located, with much research containing 'uncritical assumptions' that community energy overwhelmingly leads to positive outcomes [60]. This paper has shown that, without careful consideration of the spatial hierarchies and inequalities embedded in the spaces and places in which low-carbon energy transitions take place, new injustices may occur that undermine the ability of local energy transitions to be socially just. Furthermore, the backdrop of austerity measures reducing service provision and worsening social inequality in an area of high deprivation should encourage energy justice scholars to further explore the potential of new low-carbon energy infrastructures to be embedded within community or organisational strategies for regeneration. This is a particularly pertinent point in light of the impacts of the economic crisis faced by the UK due to the COVID-19 pandemic.
As Catney et al.'s [30] 'reality check' reminds us, deprived communities are not primarily concerned with lowering their carbon emissions. Rather, this paper shows that opportunities to combine low-carbon transitions with economic development and local regeneration appeal to deprived areas. Rather than simply making grants to local organisations and businesses, which is a ubiquitous feature of UK community energy organisations, local and community energy projects would themselves benefit from ensuring that they offer training to local residents in the installation, management and governance of local energy technologies and systems. In addition, this may also assist with gaining planning permission [55]. These issues also broadly connect to debates around just transitions in local energy schemes [60], whereby new low-carbon jobs, procedural engagement and opportunities for learning are ensured for the communities in which future projects are situated. As community energy schemes begin to plateau and local energy schemes take a leading role in energy decentralisation processes [13], it will be vital for local energy strategies and policies to embed active participant approaches.
As FITs are locked in for 20 years, the findings also have relevance for the policies of current community energy schemes that seek to use their surplus revenue in more productive ways, moving beyond opening up competitive bids for grants to local organisations and towards supporting local employment and skills training. In addition, advancing an active participant approach is integral to building the 'community capacity' [30] needed to ensure the effective long-term management and governance of local energy technologies and systems, particularly as new innovations emerge. The UK does offer a 'Community Energy Specialist' apprenticeship 6 ; however, this has not been widely used or supported to date. Community energy schemes with large surplus revenues could engage with this more fully, alongside forming partnerships with local authorities, energy intermediaries [33] and local energy networks to access finance and funding to scale up this type of activity in tandem with the shift from community to local energy more broadly.
Intermediaries are also vital for raising awareness of the potential benefits of low-carbon energy in areas of high deprivation that are close to deployed low-carbon technologies. Interestingly, two of the research participants (ALW1 and LCG1) noted in their interviews that an area in North Bristol close to Lawrence Weston, Henbury, contains similar levels of deprivation to Lawrence Weston [61] but was not involved in local energy projects. Despite its proximity to newly deployed technologies, Henbury did not receive benefits from any of the community-owned solar or wind deployment outlined in this paper, though this may well change in future initiatives. This finding places an increased emphasis on the important work of intermediaries in connecting low-carbon transitions to low-income areas that would otherwise miss out on such opportunities, as intermediaries have been shown to be effective forums and mediums through which these concerns may be voiced, acted upon and resolved [33, 62]. Policy frameworks for local energy should therefore recognise the critical roles that intermediaries play in supporting energy justice concerns and prioritise them in outreach work and relevant strategies. Furthermore, this point also relates to how future local energy schemes can relate more productively to marginalised 'energy peripheries' going forward [43, 44], alongside being integral to targeted and area-based approaches to spreading the benefits of future local energy schemes to communities in areas of high deprivation.
The geographical underpinnings of many of the claims made in the findings are closely tied to contestations over the proximity of energy infrastructures. This connects to two fundamental aspects of spatial justice within a broader discussion of energy justice. The first is that many of the grounds for claims of both injustice and more just relations by ALW are based upon the proximity of projects to the area of Lawrence Weston. Therefore, various civic and community energy actors should be acutely aware of the spatial proximity of their projects to deprived communities in future endeavours, considering how best to consult and engage with local communities to raise awareness, seek planning consent or spread the benefits of new energy infrastructures, drawing on the support of relevant intermediaries where possible. The second is that, through the deployment of solar PV farms close to the community, LCG expressed a desire to facilitate greater distributional justice by becoming an energy supply company that would provide low-cost electricity to fuel-poor households in Lawrence Weston. This kind of new energy supply set-up also relates to the closer proximity of Lawrence Weston to the source of energy generation, thereby reducing the potential transmission losses and transmission distance of electricity that are common to power provision within centralised grids. Thus, further distributional gains may be achieved should a system of decentralised provision offer the opportunity of lower energy prices and a reduction in transmission losses in power provision. Such arrangements will require the critical lens of interdisciplinary energy justice scholars to understand how to embed social justice and equity concerns in future energy decentralisation initiatives, alongside interrogating how such initiatives perform in varying research contexts [63].
As energy decentralisation takes on new technological forms, particularly via the integration of various energy storage technologies into local energy grid systems, enhancing the prospects for continued renewables deployment, the potential for local low-carbon supply futures to benefit local populations will become a critical concern for energy justice scholars. This will prove a vital area of future research for two reasons. Firstly, rapid energy market developments around the growth of energy storage technologies, flexibility markets, vehicle-to-grid services, smart meter deployment and smart grid development present a new wave of innovation in decentralised energy system development, which also presents a new set of challenges. One core issue arising from such technological innovations is how social innovation, in the form of new social enterprise models and actors seeking to capture the value of such innovations for the benefit of the wider community, can counter the domination of incumbent market players and prevent the reproduction of the social and economic inequalities that undermine new low-carbon economies. Secondly, if we are to avoid exacerbating social inequality in a post-COVID-19 society, in which social inequality is once again brought to the fore after a severe economic downturn, such concerns must be prioritised as we continue to research the impact new low-carbon infrastructures have on the people and communities that exist around them.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
That science, research and medicine strive for the benefit of humankind is a truism. That they are the products of the societies in which they operate, and thus reflect the privileges and shortcomings of those societies, is also a fact, one that often remains in the background of scientific discourse. The unvarnished reality is that inequalities cut deeply through biomedical science and patient care, with a devastating impact on individuals and communities.
The cancer field is not immune to such disparities. Racial, ethnic and socioeconomic factors weigh heavily on cancer prevention, diagnosis, access to and quality of care and, ultimately, outcomes. In the USA, members of minority racial and ethnic groups suffer disproportionately from cancer. This is well documented for many affected communities, including Black Americans, who experience higher cancer mortality rates 1 than those of the white population. For colorectal, prostate and female breast cancer in particular, both incidence and mortality are higher for Black people 1 . Black patients also have lower participation in clinical trials, even when these are testing treatments for cancer types that are highly prevalent in their population. As a result, they are denied access to potentially life-extending therapies, and clinical findings become skewed toward a non-representative, white majority. Race is a social construct that does not bear a clear relationship to genetics and biology. However, the lack of a sampling diversity that corresponds to the real-world population remains a pervasive concern in research, as it can obscure links with disease traits and therapy response and thereby reinforce health disparities 2 , as highlighted in the field of human genomics 3 .
The lack of appropriate representation is also reflected in the composition of the medical workforce, with only 5% of physicians and 2.3% of oncologists self-identifying as Black or African American, despite the fact that Black Americans make up 13.4% of the US population. Similarly, minority ethnic or racial groups reportedly comprise only 3-7% of biomedical research faculty in the USA 4 , even though they are better represented at the doctoral and postdoctoral levels.
The causes of these disparities are complex and include historical and structural racism, implicit racial and social biases, entrenched economic, educational and healthcare inequities, and the cultural and behavioral trends of communities and individuals. When considering these latter behavioral factors, it is important to acknowledge the mistrust of many African Americans toward the medical system in light of the exploitation and discrimination to which they have been subjected historically. The notorious Tuskegee syphilis study is one such example, as is the case of Henrietta Lacks in the cancer field, a story that is as much about the general lack of bioethical standards at the time as it is about suffering and dying of cancer in the racially segregated US society of the 1950s.
The scientific community has been working toward understanding and addressing these issues. In the USA, the National Institutes of Health (NIH) have a long-held policy on the inclusion of minorities in clinical research. The more recent launch of the NIH-driven All of Us research program also aims to engage participants from traditionally underrepresented groups so that contributed health data will be representative of the diversity in the US population 5 . Among other efforts by the American Association for Cancer Research, the 2020 by 2020 initiative aims to address cancer disparities in the African-American population by collecting genomic and clinical data from 2020 African-American patients with cancer. The US National Cancer Institute's Center to Reduce Cancer Health Disparities is dedicated to decreasing the disproportionate cancer burden in society through research, training, education and mentoring efforts. The American Society of Clinical Oncology has announced a strategic plan to increase racial and ethnic diversity in the oncology workforce. On a political level, policies such as the Affordable Care Act have enhanced healthcare equity by expanding coverage and access to medical care to disadvantaged groups, including Black people 6 . Affirmative changes in policy could in fact drive rapid and meaningful change. It has been estimated that 22% of the cancer deaths in 2018 in the USA could have been avoided had these patients had access to and quality of health care and treatment similar to that of college-educated people 7 .
Such efforts are important, but they are not yet enough. From access to diagnosis, treatment and care, to population representation in the patient cohorts that inform research findings and drive clinical discovery, racial and ethnic minorities remain disadvantaged around the world. The COVID-19 pandemic has shone new light on these health disparities. Although race and ethnographic data continue to be limited, emerging analyses show that in the USA, minority communities, including African Americans and Latino Americans, bear an unequal burden when it comes to infection and mortality rates 8 . The underlying reasons await detailed study, but the fact that historical and systemic inequality and discrimination have been affecting these communities in terms of financial means, access to healthcare insurance and diagnostic and treatment centers, quality and security of housing, and their ability to avoid virus exposure through work, cannot be denied.
Against this backdrop of health inequity, rooted in large part in racial and social exclusion and discrimination, came the recent killing of George Floyd, an African American man, at the hands of police. This was not an isolated or unprecedented incident but rather one event in a long string of injustice and brutality due to racial discrimination in the USA. At the time of writing, on the day of George Floyd's Minneapolis memorial service, the words of the poet Paul Laurence Dunbar come to mind: "a pain still throbs in the old, old scars / and they pulse again with a keener sting. " Systematic and institutionalized racism continues to ripple through society, and its corrosive effects cannot be ignored. Inequality and discrimination have many incarnations, but one of their common drivers is the neutrality of a majority, the indifference and passive acceptance that perpetuates injustice. The street protests that have swept through the USA during the past ten days have reignited a much-needed public dialog about racism. Biomedical science has an important part to play in this discussion. As nations reopen their economies and the research enterprise gears up again, we should not seek to restart, picking up from where we left off and returning to our own definition of normality. Rather, we should redouble efforts to alter the conditions that permit inequity to persist in academia and the industry, in healthcare systems and clinical practice, and in science communication and publishing.
How do we catalyze this change? How do we transform the heartfelt expressions of solidarity into something more tangible than words and more meaningful than symbolic gestures of inclusivity? A first, essential step is to give the voices that are being drowned out the space, attention and respect they are due. We need to listen, learn and reflect. Moreover, beyond denouncing the overt, poisonous hate that sustains racism, we need to talk about the latent intolerance, the implicit biases, the passive neutrality and the entrenched privilege that permit racism to persist. They are more difficult to discern and therefore are the hardest to uproot. Recognizing and addressing their presence in our daily lives and the social and professional structures in which we operate is essential. It may also be uncomfortable, but that is a good thing. Change does not come from a place of comfort. We must remember and act on this long after the street protests end and the news cycle moves on.
Nature Cancer stands with the Black community and minority and underrepresented groups against discrimination and intolerance. We are committed to supporting work on health disparities and diversity and highlighting these issues through our pages. We pledge to increase diversity in our reviewer pool and to amplify the voices of underrepresented minority authors. Finally, we promise to continue educating ourselves, so that we may contribute to the efforts to level inequalities in a meaningful manner. To that end, we welcome the comments and ideas of our readers at [email protected].
Published: xx xx xxxx https://doi.org/10.1038/s43018-020-0091-x
|
One year post discharge, mild to moderate pulmonary dysfunction was observed in the majority of patients. Further, 54.2% of patients had signs of abnormal pulmonary function, including diffusion disorder (33.3%) and small airway dysfunction (33.3%). Fourteen patients presented with respiratory tract infection symptoms: 12 with abnormal pulmonary function and two with normal pulmonary function. Our results indicated that the change in pulmonary function at one year post discharge was not significantly correlated with the severity of H1N1 influenza.
Signs and symptoms of abnormal pulmonary function, accompanied by respiratory tract infection symptoms, remain for some patients one year after discharge from the hospital for mild influenza A virus subtype H1N1 infection. These patients should continue to receive follow-up care.

Introduction: Influenza A virus subtype H1N1, a pandemic 2009 strain, caused widespread outbreaks of influenza in humans. As of 17 June 2010, more than 214 countries had reported confirmed cases of infection with pandemic 2009 influenza A (H1N1) virus [1]. Patients typically presented with severe pneumonia and acute respiratory distress syndrome (ARDS), which led to severe lung damage and, in some cases, death. After recovery from severe pneumonia and ARDS, various degrees of lung lesions occur, having an impact on patients' respiratory function and, in turn, their quality of life. In this study, we examined the pulmonary function of patients infected with influenza A virus subtype H1N1 one year after hospitalization for the infection. These results provide valuable information for future diagnosis and rehabilitation treatment of H1N1 and other pandemic or severe influenza strain infections.

Patients were examined in accordance with standard pulmonary function testing guidelines [2]. To ensure patients were not examined during or shortly after airway infections, all participants answered a questionnaire detailing any complaints of dyspnea, tiredness, cough, expectoration, medical treatment and smoking habits. The Modified Medical Research Council Dyspnea Scale was used to evaluate dyspnea in patients with abnormal pulmonary function (a score of 4 points, 2 cases; 3 points, 4 cases; 2 points, 14 cases; 1 point, 4 cases; and 0 points, 2 cases) and with normal pulmonary function (a score of 4 points, 2 cases; 3 points, 2 cases; 2 points, 8 cases; 1 point, 10 cases; and 0 points, 2 cases). Of these 48 patients, 38 were diagnosed by members of the Department of Respiratory Medicine and ten were diagnosed by members of the Department of Infection. The study included 26 male and 22 female patients with an average age of 29.5 years (range 27-39.5). Of the original 102 patients, eight (7.8%) had died: one from pneumonia and seven from disorders that could not be attributed to pulmonary disease. Forty-six (45.1%) patients were not re-examined due to practical problems. However, based on the data from 2009, these 46 patients did not differ from the 48 re-examined patients with respect to age, sex, disease duration, or degree of pulmonary function. Patients with chronic respiratory system disease (i.e., chronic obstructive pulmonary disease, asthma, pulmonary fibrosis, silicosis), chronic heart disease, or nervous and mental diseases were excluded.
The experimental protocol was established according to the ethical guidelines of the Helsinki Declaration and was approved by the Human Ethics Committee of Jilin University, China. Written informed consent was obtained from individual participants.
Approximately one year (±1 month) after recovery from influenza and discharge from the hospital, each patient included in the study was assessed for pulmonary function using the MasterScreen PFT system (Jaeger, Germany). The indices for pulmonary function as part of this test include: tidal volume (VT), vital capacity (VC), flow-volume loop, forced expiratory volume in 1 second (FEV1), maximal mid-expiratory flow (MMEF), forced expiratory flow at 50% and 75% (FEF50, FEF75) and maximum voluntary ventilation (MVV). The indices for pulmonary diffusion function include the diffusing capacity of the lungs for carbon monoxide (DLCO) and the diffusion rate. Patients rested for 30 minutes before testing, and tests were performed in duplicate for each patient, with the higher of the two values included in the study.
Respiratory tract infection symptoms (e.g., cough, expectoration or gasping), vital signs and pulse oxygen saturation (SpO2) were evaluated for each patient. Retrospective evaluation: clinical data, including respiratory tract infection symptoms and vital signs [3, 4], chest CT, blood gas analysis, mechanical ventilation, and the presence of secondary infection, were also retrospectively analyzed at one year post hospital discharge. Results from the current clinical testing and from the retrospective analysis were correlated with the severity of H1N1 influenza infection. For subjects whose pulmonary function tests were normal at the one-year follow-up, routine blood tests and physical examination were unremarkable and chest X-rays showed no significant change.
Statistical analysis was performed using SPSS 17.0 (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered statistically significant. Enumeration data were expressed as incidence rates and compared with a chi-square test; measurement data were expressed as mean ± standard deviation and analyzed with parametric statistics.
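As a minimal illustration of this enumeration-data analysis, the sketch below (Python with SciPy, used here only as an assumed stand-in for the SPSS workflow) applies a chi-square test to the 2x2 table of respiratory-symptom counts reported in the results (12 of 26 patients with abnormal pulmonary function versus 2 of 22 with normal function); the published P value of 0.047 may reflect a different correction or grouping than this sketch.

```python
# Hedged sketch: chi-square comparison of respiratory-symptom rates
# between the abnormal and normal pulmonary-function groups.
# Counts are taken from the text (12/26 vs 2/22); the published
# P value may have been computed with a different correction.
from scipy.stats import chi2_contingency

# Rows: abnormal / normal pulmonary function
# Columns: with symptoms / without symptoms
observed = [[12, 26 - 12],
            [2, 22 - 2]]

chi2, p, dof, expected = chi2_contingency(observed)  # Yates-corrected by default for 2x2
print(f"chi2 = {chi2:.2f}, p = {p:.3f} (dof = {dof})")
```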
To assess the potential long-term effects of mild H1N1 influenza infection, patients were first assessed at approximately one year following recovery and hospital discharge. At this time, 29.2% (14/48) were observed to have obvious respiratory tract infection symptoms and 41.7% (20/48) had difficulties in performing physical activities. Pulse oxygen saturation was greater than 95% in all patients, and no patient had abnormal vital signs. We then tested each patient for pulmonary function and found that 45.8% (22/48) had normal pulmonary function while 54.2% (26/48) had abnormal pulmonary function, all presenting with mild to moderate changes. Several types of abnormality were found, including diffusion disorder, small airway function disorder, and weakened reserve function (Table 1).
Of the 22 patients with normal pulmonary function, two had respiratory tract infection symptoms, while six were observed to have a decreased ability to perform general physical activities. Of the 26 patients with abnormal pulmonary function, 12 had respiratory tract infection symptoms and 14 had a decreased ability to perform general physical activities. There was a clear correlation between respiratory tract infection symptoms and pulmonary function: patients with abnormal pulmonary function had a higher percentage of respiratory tract infection symptoms than patients with normal pulmonary function (P = 0.047). Furthermore, abnormal pulmonary function had a slightly, but not significantly, greater influence on daily activities than normal pulmonary function (P = 0.188) (Table 2). Finally, ten patients were observed to have more than three abnormal pulmonary function indices, manifesting as respiratory tract infection symptoms and resulting in decreased general physical activities.
Using the Modified Medical Research Council Dyspnea Scale, scores of four (two cases), three (four cases), two (14 cases), one (four cases) and zero (two cases) were observed for patients with abnormal pulmonary function. Similarly, for patients with normal pulmonary function, scores of four (two cases), three (two cases), two (eight cases), one (ten cases) and zero (two cases) were observed. There were no significant differences in dyspnea scores between patients with abnormal pulmonary function and patients with normal pulmonary function. In addition, there were no significant differences in total hospital days or poorest oxygenation index between the two groups. Taken together, these results do not indicate a correlation between pulmonary function at one year after discharge and the severity of the initial influenza infection (Table 3).
A common severe clinical manifestation of infection with influenza A virus subtype H1N1 is severe ARDS [5]. During recovery, pulmonary fibrosis is the major pathological change observed [4]. In addition, abnormal pulmonary function is manifested as decreased diffusion function and restrictive ventilatory disorder [6]. There is precedent for long-term negative effects from pulmonary infection, as viral pneumonia-caused ARDS is a typical manifestation of severe acute respiratory syndrome (SARS) infections. Specifically, SARS patients presented with decreased pulmonary diffusion function during recovery [7-10]. Furthermore, a study by Neff et al. [11] revealed that among 16 survivors of severe ARDS, nine had abnormal pulmonary function, of whom four presented with obstructive ventilatory disorder and four with restrictive ventilatory disorder. In addition, a study by Li et al. [12] found the incidence of obstructive ventilatory disorder and restrictive ventilatory disorder was approximately 30% following infection. Interestingly, small airway dysfunction was also reported in a small number of SARS patients during recovery [8]. This is the first study to assess the long-term effects of mild influenza A virus subtype H1N1 infection. Pulmonary diffusion disorder during recovery from H1N1 influenza is similar to that seen after ARDS; however, a large proportion of patients recovering from influenza infection also show signs of small airway obstruction. In addition, this study reveals that approximately half of patients recovering from H1N1 influenza had abnormal pulmonary function: one third had diffusion dysfunction, one third had small airway obstruction, and another third presented with decreased ventilation function. The pathological changes following H1N1 influenza-induced severe pneumonia are of three types: diffuse alveolar lesion, necrotizing bronchiolitis and widespread pulmonary hemorrhage [13]. This suggests that necrotizing bronchiolitis is likely to be the pathological basis of small airway obstruction. Here, we found 25% of patients had respiratory tract infection symptoms including cough, expectoration, or gasping, while 41.7% of patients had difficulties in performing general physical activities. Interestingly, the observed clinical symptoms correlated with patients having more than three abnormal pulmonary function indices. In this study, we did not identify a relationship between abnormal pulmonary function in patients with H1N1 influenza and the severity of pulmonary function impairment during hospitalization, possibly because only patients with mild H1N1 influenza, and a small number of them, were involved. This is consistent with previous studies investigating the effects of ARDS on pulmonary function [14, 15]. Several variables were not included in this study that may also have had an effect on the recovery of pulmonary function following influenza infection, including age, obesity, gender, recovery time, heart function, and the amount of physical rehabilitation exercise.
Some patients still have respiratory tract infection symptoms and limited physical activity one year after recovering from H1N1 infection. While no correlations were drawn between the severity of infection and these symptoms, attention should be paid to these patients, including follow-up pulmonary function tests to guide them to proper rehabilitation treatment, with the ultimate goal of improving their quality of life.
|
Older adults bear a disproportionate burden of hospitalization and mortality due to COVID-19. They are also at risk for unjust treatment by healthcare resource allocation frameworks under conditions of resource scarcity. Early in the pandemic, age-based cutoffs for resource allocation were proposed and reportedly implemented in Italy. 1 In the United States, the Office for Civil Rights of the Department of Health and Human Services reached resolutions with several states to revise crisis standards of care that had included age-based cutoffs. 2 These cutoffs have largely been eliminated from state crisis standards of care; however, they may be reappearing in decisions about allocation of other potentially scarce medical resources, such as vaccines.
In September 2020, the National Academy of Sciences, Engineering, and Medicine (NASEM) released its Discussion Draft of the Preliminary Framework for Equitable Allocation of a COVID-19 Vaccine. 3 The draft framework appropriately relies on six basic principles: maximizing reductions in mortality and morbidity, mitigating health inequities, giving equal regard to each individual, setting allocation criteria fairly, ensuring that criteria are evidence based, and communicating with the public about the criteria in a transparent manner. 3 It also appropriately recognizes that decisions about vaccine allocation must be responsive to circumstances. 3 Under present circumstances, the draft framework recommends prioritizing those at highest risk of becoming infected and experiencing serious outcomes, those in essential social roles, and those at greatest risk of transmitting the virus to others. 3 At the same time, the draft framework reintroduces reasoning about age that is ethically problematic. When both younger and older persons are equally at risk, the draft framework recommends prioritizing the younger person for vaccination. 3(p40) Underlying this type of age-based tiebreaker are frameworks referred to in ethics as "life-years saved" and "fair innings."
Even when used as a tiebreaker, moving from rationing based on immediate reductions in mortality and morbidity to rationing based on a life-years saved framework raises ethical concerns. The Office for Civil Rights judged as discriminatory any reliance on "years of life saved" to decide how resources are allocated to population groups. 2 It observed that such rationing treats individuals based solely on the category within which they fall, rather than on individualized assessments of their likelihood of survival, and it also reasoned that age cutoffs should never be used to exclude people from life-saving treatments, such as ventilators. 2 In acute care settings, multiprinciple allocation frameworks that equally weigh in-hospital survival (using tools such as the Sequential Organ Failure Assessment) and severe comorbidities contributing to short-term mortality should be the primary allocation method when resources are limited. 4 Moreover, age is not a particularly good proxy for life-years saved. The life-years saved concept assumes the ability to accurately prognosticate long-term life expectancy; however, long-term predictions of life expectancy are notoriously unreliable. The life-years saved approach, in short, obscures the heterogeneity of older adults with respect to their health status and other individual characteristics. 4 The so-called "fair innings" argument, which favors younger age groups because they have lived fewer life-years, has also been used to justify resource allocation based on age. 5 The fair innings argument is intuitively appealing because it seems unfair that younger people should die without having the opportunity to live through various stages of life. However, this argument also rests on ethically problematic assumptions, two of which we describe here.
First, the fair innings argument assigns greater value to earlier rather than to later stages of life. If the short-term (i.e., <6 month) prognoses of a younger adult and an older adult are identical, the fair innings argument would still favor allocating a limited healthcare resource to the younger adult based on his/her being at an earlier stage in life. This assumption, that earlier stages of life are more valuable, may reflect ageism.
Second, the fair innings argument does not account for factors, such as racism, disparate access to health care, and economic inequality, that are associated with decreased life spans and thus fewer "innings." These factors call attention to many complex reasons why innings may be judged unfair that do not rest simply on whether some persons have had fewer innings than others.
A final version of the NASEM report, "Framework for Equitable Allocation of COVID-19 vaccine," 6 was released in October 2020. This document incorporated feedback from the public, including oral and written testimony from the American Geriatrics Society (AGS). Importantly, NASEM distanced the guidelines from the previous focus on life-years saved and instead focused on avoidance of death, 6(pp3-11) citing concerns about ageism that had been raised in the AGS testimony. However, NASEM did not exclude the possibility of reverting to the life-years saved argument in situations where younger adults have disproportionately high mortality from a pandemic. 6 We commend NASEM for deemphasizing the life-years saved approach in its final COVID-19 vaccine allocation framework. We also urge NASEM and other groups, including Centers for Disease Control and Prevention and Advisory Committee on Immunization Practices, to avoid reverting to the life-years saved argument in the future given its inherent ageism.
Some resource allocation strategies cite the instrumental value of certain groups, such as essential workers, including hospital and nursing home staff, firefighters, and the police, as priorities for scarce resources. One approach argues against prioritizing older adults with fewer remaining life-years to receive a COVID-19 vaccine because "advanced age reduces likelihood of working in high-transmission settings or being an essential caregiver." 7 As with other efforts to insert valuation-based metrics, this approach has significant limitations. As the pandemic continues, we are increasingly aware that the definition of who is "essential" inappropriately excludes many others, such as caregivers, teachers, scientists, delivery drivers, journalists, and grocery store and plant workers. Likewise, society often underestimates the essential contributions of older adults in discussions of instrumental value within resource allocation strategies. For example, grandparents often care for grandchildren and hold together family units. Adults older than 65 years comprised 19% of the caregivers for adults aged 18 years or older. 8 Grandparents may also take on full-time parenting responsibilities for children whose parents have died or are otherwise unavailable. Given advances in longevity, older adults serve in critical leadership roles throughout government, public health, and business; provide philanthropic support; and serve as mentors to younger adults.
When faced with potential and painful shortages of healthcare resources, allocation decisions should be based on the most direct and immediate goal of minimizing immediate and short-term mortality and morbidity. Resource allocation strategies must be developed with multidisciplinary input, applied uniformly and transparently, and subjected to regular and rigorous review to ensure equitable and unbiased implementation and to remove any ageist provisions. Furthermore, a postpandemic review of resource allocation strategies that were actually implemented, including strategies to allocate a COVID-19 vaccine, should be conducted to ensure that unjust resource allocation strategies do not persist. 4 By adopting these approaches, it will be possible to ensure that no group is unjustly disadvantaged by resource allocation strategies under conditions of resource scarcity.
Some fans, assuming they know the game's result, do not watch the later innings; however, just as many believe the later innings can be equally important.
|
With the rapid development of computer hardware, software, and algorithms, drug screening and design have benefited greatly from various computational methods, which reduce the time and cost of drug development. In general, bioinformatics can help reveal key genes from massive amounts of genomic data [1, 2] and thus provide possible target proteins for drug screening and design. As a supplement to experiments, protein structure prediction methods can provide protein structures with reasonable precision [3]. Biomolecular simulations with multiscale models allow for investigations of both structural and thermodynamic features of target proteins on different levels [4], which are useful for identifying drug binding sites and elucidating drug action mechanisms. Virtual screening then searches chemical libraries to provide possible drug candidates based on drug binding sites on target proteins [5-7]. With a greatly reduced number of possible drug candidates, in-vitro cell experiments can further evaluate the efficacy of these molecules. In addition to virtual screening, de novo drug design methods [8], which generate synthesizable small molecules with high binding affinity, provide another direction for computer-aided drug design. Artificial intelligence, e.g., machine learning and deep learning, is playing an increasingly important role in these computational methods and thus in drug development [9-11]. In this review, we focus on developments in the last four computational methods as well as their applications in drug screening and design. In one kinetic Monte Carlo (MC) approach, for example, rates of transitions between states are predicted by the transition state theory: an isomorphism between probabilities obtained from the MC process and probability factors obtained from transition state theory [36] is used to convert the MC process to a time-dependent simulation with additional simplified modifications [37].
The design, discovery, and development of drugs are complex processes involving many different fields of knowledge, and are considered time-consuming and laborious interdisciplinary work [38-41]. Different drug design methods and virtual screening are very useful for designing and finding rational drug molecules based on the target macromolecule that interacts with the drug, and thus speed up the whole drug discovery process. Here, we will discuss structure-based drug design, ligand-based drug design, and virtual screening.
Structure-based drug design must be performed with available structural models of the target proteins, which are provided by X-ray diffraction, nuclear magnetic resonance (NMR) or molecular simulation (homologous protein modeling, etc.) [42-46]. Keeping in mind the complexity of cancers, which show diverse phenotypes and multiple etiologies, a one-size-fits-all drug design strategy for the development of cancer chemotherapeutics does not yield successful results. Lately, Arjmand et al. [47] adopted a series of methods, such as the combination of X-ray crystal structures and molecular docking, to design, synthesize, and characterize novel chromone-based copper(II) antitumor inhibitors. In general, after the structure of the receptor macromolecule is obtained by X-ray single-crystal diffraction or multi-dimensional NMR, molecular modeling software can be used to analyze the physicochemical properties of drug binding sites on the receptor, including the electrostatic field, hydrophobic field, hydrogen bonds, and key residues. Then, a small molecule database is searched, or drug design techniques are used, to identify suitable molecules whose shapes match the binding sites of the receptor and whose binding affinity is high. These molecules are then synthesized and their biological activities tested for further drug development. In short, structure-based drug design plays an extremely important role in drug design.
Unlike structure-based drug design, ligand-based drug design does not search small molecule libraries. Instead, it relies on knowledge of known molecules that bind to the target macromolecule of interest. Using these known molecules, a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target can be derived [48, 49]. This model can then be used to design new molecular entities that interact with the target. Alternatively, ligand-based drug design can use quantitative structure-activity relationships (QSAR) [50, 51], in which a correlation between calculated properties of molecules and their experimentally determined biological activity is derived, to predict the activity of new analogs. Both the pharmacophore model and the QSAR model are discussed in detail in the following sections.
In recent years, the rapid development of computational resources and small molecule databases has led to major breakthroughs in the development of lead compounds. As the number of new drug targets increases exponentially, computational methods are increasingly being used to accelerate the drug discovery process, leading to the increased use of computer-assisted drug design and chemical bioinformatics techniques such as high-throughput docking, homology searching and pharmacophore searching in databases for virtual screening (VS) [51]. Virtual screening is an important part of computer-aided drug design, and may be the cheapest way to identify potential lead compounds; many successful applications of this technology have been reported.
The primary technique for identifying new lead compounds in drug discovery is to physically screen large chemical libraries against biological targets. Experimentally, high-throughput screening identifies active molecules by performing separate biochemical assays of more than one million compounds, but it involves significant costs and time. Therefore, a cheaper and more efficient computational method came into being, namely virtual high-throughput screening, which has been widely used in early drug development. Its main purpose, consistent with that of high-throughput screening, is to identify novel active small-molecule structures from large compound libraries. The difference is that virtual screening can save substantial experimental costs by significantly reducing the number of compounds whose pharmacological activity must be measured, whereas high-throughput screening needs to assay all compounds in the database. Here, we discuss common methods of virtual screening.
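As a minimal illustration of the ligand-based flavor of virtual screening, the sketch below (assuming the RDKit toolkit; the query, the tiny library and the 0.4 similarity cutoff are invented purely for demonstration) ranks candidate molecules by Tanimoto similarity of Morgan fingerprints to a known active.

```python
# Minimal ligand-based virtual screening sketch using RDKit.
# A real screen would run over a large library (e.g., ZINC); the
# SMILES strings and the 0.4 cutoff here are illustrative only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a known active
library = {
    "salicylic acid": "OC(=O)c1ccccc1O",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

query_fp = AllChem.GetMorganFingerprintAsBitVect(query, radius=2, nBits=2048)
for name, smi in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), radius=2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(query_fp, fp)
    verdict = "hit" if sim > 0.4 else "reject"  # arbitrary demonstration threshold
    print(f"{verdict}: {name} (Tanimoto = {sim:.2f})")
```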
Molecular docking, which predicts the interaction patterns between proteins and small molecules as well as between proteins and proteins, to evaluate the binding between two molecules [52], is widely used in the field of drug screening and design. Its theoretical basis is that ligand-receptor recognition relies on spatial shape matching and energy matching, the theory of "induced fit". Determining the correct binding conformation of small molecule ligands and protein receptors in the formation of complex structures is the basis for drug design and for studying drug action mechanisms. Molecular docking can be roughly divided into rigid docking, semi-flexible docking and flexible docking. In rigid docking, the structures of the molecules do not change; the calculation is relatively simple and mainly examines the degree of conformational matching, so it is more suitable for studying macromolecular systems, such as protein-protein and protein-nucleic acid systems. In semi-flexible docking, the conformation of molecules can vary within a certain range, so it is more suitable for the interaction between proteins and small molecules [53]. In general, the structure of small molecules can change freely, while the macromolecule remains rigid or retains some rotatable amino acid residues to ensure computational efficiency. In flexible docking, the conformation of the simulated system is free to change, consuming more computing resources while improving accuracy. Moreover, the identification of binding sites is very important in molecular docking methods. Collins [54] was the first to successfully determine binding sites on protein surfaces using a multi-scale algorithm and to perform flexible docking of molecules, which greatly promoted the development of molecular docking.
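To make the semi-flexible workflow concrete, the sketch below shows how a rigid receptor and a flexible ligand might be docked by calling the AutoDock Vina command-line program from Python. The file names and search-box coordinates are placeholders, and Vina is only one of several docking engines consistent with the description above.

```python
# Semi-flexible docking sketch: a rigid receptor and a flexible ligand
# scored with AutoDock Vina via its command-line interface.
# "receptor.pdbqt"/"ligand.pdbqt" and the box coordinates are
# placeholders; real values come from the target's binding site.
import subprocess

cmd = [
    "vina",
    "--receptor", "receptor.pdbqt",   # rigid protein
    "--ligand", "ligand.pdbqt",       # flexible small molecule
    "--center_x", "10.0", "--center_y", "12.5", "--center_z", "-3.0",
    "--size_x", "20", "--size_y", "20", "--size_z", "20",
    "--exhaustiveness", "8",
    "--out", "poses.pdbqt",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)  # Vina prints a table of poses and affinities (kcal/mol)
```

Keeping the receptor file fixed while Vina samples ligand torsions mirrors the semi-flexible compromise described above: accuracy for the small molecule, efficiency for the macromolecule.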
A pharmacophore is an abstract description of the molecular features necessary for molecular recognition of a ligand by a biological macromolecule, which explains how structurally diverse ligands can bind to a common receptor site. When a drug molecule interacts with a target macromolecule, it adopts an active conformation that matches the target geometrically and energetically. Medicinal chemists found that different chemical groups in drug molecules have different effects on activity: changes to some groups have a great influence on the interaction between drugs and targets, while others have little effect [55]. Moreover, it was found that molecules with the same activity tend to share certain characteristics. Therefore, in 1909, Ehrlich proposed the concept of the pharmacophore, referring to the molecular framework of atoms carrying the features essential for activity [56]. In 1977, Gund further clarified the concept as the set of structural features in a molecule that is recognized at a receptor site and is responsible for the molecule's biological activity [57].
There are two main methods for the identification of pharmacophores. On one hand, if the target structure is available, the possible pharmacophore can be inferred by analyzing the mode of action between the receptor and drug molecules. On the other hand, when the structure of the target is unknown or the action mechanism is still unclear, a series of compounds is studied, and information on the groups that play a key role in compound activity is summarized by means of conformational analysis and molecular superposition [58]. Active compounds suitable for constructing the model are selected in the pharmacophore recognition process; conformational analysis is then used to find the binding conformation of each molecule and to determine the pharmacophore [59]. In recent years, with the development of compound databases and computer technology, virtual screening of databases using pharmacophore models has been widely used and has become one of the important means of discovering lead compounds.
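As an illustration of the ligand-side starting point for pharmacophore identification, the sketch below uses RDKit's built-in feature definitions to enumerate pharmacophoric features (hydrogen-bond donors and acceptors, aromatic rings, and so on) of a single molecule. The ligand is an arbitrary example; a real pharmacophore model would align such features across a series of active compounds.

```python
# Sketch: extracting pharmacophoric features from one ligand with
# RDKit's built-in feature definitions. The ligand and its embedded
# 3D conformation are illustrative only.
import os
from rdkit import Chem, RDConfig
from rdkit.Chem import AllChem, ChemicalFeatures

factory = ChemicalFeatures.BuildFeatureFactory(
    os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef"))

mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1"))  # paracetamol
AllChem.EmbedMolecule(mol, randomSeed=42)  # 3D coordinates for feature positions

for feat in factory.GetFeaturesForMol(mol):
    pos = feat.GetPos()
    print(f"{feat.GetFamily():>10s} at ({pos.x:.2f}, {pos.y:.2f}, {pos.z:.2f})")
```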
QSAR is a quantitative study of the interactions between small organic molecules and biological macromolecules. It derives a correlation between calculated properties of molecules (e.g., absorption, distribution, and metabolism of small organic molecules in living organisms) and their experimentally determined biological activity [51]. When the receptor structure is unknown, the QSAR method is the most accurate and effective method for drug design. Drug discovery often uses QSAR to identify chemical structures that could have good inhibitory effects on specific targets and low toxicity (non-specific activity). With the further development of structure-activity relationship theory and statistical methods, 3D structural information was introduced into the QSAR method in the 1980s, namely 3D-QSAR. Since the 1990s, with the improvement of computing power and the accurate determination of the 3D structures of many biomacromolecules, structure-based drug design has gradually replaced the dominant position of quantitative structure-activity relationships in the field of drug design, but QSAR, with its low computational cost and good predictive ability [60], still plays an important role in pharmaceutical research.
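A minimal QSAR sketch along these lines, assuming RDKit and scikit-learn are available and using a handful of invented molecules and activity values purely for illustration, fits a linear model between computed descriptors and activity; a real QSAR study would need a much larger, curated dataset and proper external validation.

```python
# Minimal QSAR sketch: linear model between computed descriptors and
# activities. The five molecules and their pIC50 labels are fabricated
# for illustration only.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

smiles = ["CCO", "CCCCO", "c1ccccc1O", "CC(=O)O", "CCN(CC)CC"]
pic50 = np.array([4.2, 4.8, 5.5, 3.9, 5.1])  # fabricated activities

def descriptors(smi):
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

X = np.array([descriptors(s) for s in smiles])
model = LinearRegression().fit(X, pic50)
print("R^2 on training data:", model.score(X, pic50))
print("predicted pIC50 for CCCO:", model.predict([descriptors("CCCO")])[0])
```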
Based on the 3D structural characteristics of ligands and targets, 3D-QSAR explores the 3D conformations of bioactive molecules, accurately reflects the energy changes and patterns of interaction between bioactive molecules and receptors, and reveals the mechanism of drug-receptor interactions. The physicochemical parameters and 3D structural parameters of a series of drugs are fitted to a quantitative relationship, and the structures of new compounds are then predicted and optimized. In short, 3D-QSAR is a research method combining QSAR with computational chemistry and molecular graphics. It is a powerful tool for studying the interactions between drugs and target macromolecules, inferring an image of the simulated target, establishing structure-activity relationships, and designing drugs.
Computer-based de novo design of drug-like molecules mainly aims to generate small molecule compounds with ideal physicochemical and pharmacological properties. In the past decades, fragment-based drug discovery has emerged as a novel concept that shows good prospects for improving lead optimization, in order to decrease the clinical attrition rates in drug design. It is an approach that uses small molecular fragments to probe biomolecular targets [61]. Fragment-based de novo design has achieved long-term clinical success [62].
Although modern drug discovery has had some success in delivering effective drugs, drug design is affected by several factors, such as the tremendous chemical space available for exploring drug molecules [63]. Further, as the amount of biological, chemical, and clinical data increases, it is clear that drug design should be approached with multiscale optimization methods that consider data beyond the molecular level [64]. Thus, it is essential to discuss the function of multiscale models in drug discovery and how they have predicted multiple biological properties for different biological targets. Accordingly, we discuss the combined application of fragment-based de novo design and multiscale modeling.
The fragment-based de novo design method starts with small building blocks. Initial molecular building blocks with desired properties are either elaborated upon (growing), directly connected (joining), or connected by a linker (linking). This process can be iterated until one or more molecules with the desired properties are obtained. There are two variants, namely structure-based and ligand-based methods [65]. The structure-based de novo design method searches for novel ligands by using the 3D structural information of the protein target; ligands are usually constructed directly in the target protein binding site and evaluated by calculating the interaction energy of the target protein with the ligand. In the ligand-based de novo design method, by contrast, the structure of the protein target is unknown, and new molecules are suggested based on structures analogous to known ligand molecules.
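As a sketch of the "joining" strategy, the example below uses RDKit's BRICS module to decompose two known drug-like molecules into fragments and recombine the fragment pool into new candidate structures. The seed molecules are illustrative, and generated candidates would still require property filtering and target-based scoring.

```python
# Fragment "joining" with RDKit's BRICS module: decompose two seed
# molecules into fragments with attachment points, then recombine
# the fragment pool into novel candidates. Seeds are illustrative.
from itertools import islice
from rdkit import Chem
from rdkit.Chem import BRICS

seeds = [Chem.MolFromSmiles(s) for s in
         ("CC(=O)Nc1ccc(O)cc1",              # paracetamol
          "CC(C)Cc1ccc(cc1)C(C)C(=O)O")]     # ibuprofen

fragments = set()
for mol in seeds:
    fragments.update(BRICS.BRICSDecompose(mol))  # SMILES with dummy atoms
print(len(fragments), "unique fragments")

frag_mols = [Chem.MolFromSmiles(f) for f in fragments]
for candidate in islice(BRICS.BRICSBuild(frag_mols), 5):  # first 5 products
    candidate.UpdatePropertyCache(strict=False)
    print(Chem.MolToSmiles(candidate))
```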
Nowadays, proteochemometrics has emerged as a relatively new discipline for drug discovery. In this field, QSAR analysis is a powerful tool for efficient virtual screening, capturing the physicochemical properties of various compounds. Compared to classical QSAR, QM calculations supply reactivity descriptors for ligand-based QSARs, which provide an implicit model and calculate exact enthalpy contributions of protein-ligand interactions. In structure-based QSARs, ab initio fragment molecular orbital calculations provide an explicit model and clear enthalpy changes of the binding energy under different conditions. Moreover, the free energy contribution of ligand-target complex formation can be calculated in both structure-based and ligand-based QSAR models. Using a large number of ligand-target complexes to examine changes in binding affinity, more accurate optimization steps can be conducted based on models with good prediction and interpretation [66]. The key to any QSAR model is how accurately the molecule is described, and QM approaches provide a better understanding of the molecular and structural characteristics of ligands and drugs, helping to solve problems in drug discovery.
QSAR models of different scales are built according to different computational precisions; multiscale QSAR research mainly concerns the structural description of the training set and involves both small molecules and macromolecules [67, 68]. At micro-, meso- and macroscopic scales, different molecular approaches are used. QM approaches are often used to perform precise calculations at the microscale, as in atom-based QSAR; molecular force fields focus on mesoscopic-scale simulation, as in fragment-based QSAR; and coarse-grained studies are mainly performed at the macroscopic scale, as in macromolecule-based and cell-based QSAR. Multiscale character can also be reflected in the different dimensionalities used in different QSAR models. CoMFA is a 3D fragment-based QSAR technique that can perform scaffold hopping and R-group substitution, providing different structures for new drug design. Besides, derived from proteochemometrics (PCM) [69], the 2.5D kinochemometrics (KCM) approach, which uses 3D descriptors for protein kinases and 2D fingerprints for ligands, can greatly increase efficiency as well as precision compared with traditional 3D QSAR methods [70]. Multiscale QSAR provides effective predictions for drug design, integrating QSAR more systematically and applying all existing QSAR methods effectively.
Multiscale de novo drug design is a novel concept that combines QSAR models, QM calculations [71] and fragment-based drug discovery (FBDD) [72]. Here, the importance of explicit molecular descriptors is shown in a model from a molecular structural point of view through QM calculations. By assembling reasonable molecular fragments, the objective of the drug design method is to produce novel molecules that display the highest biological activity, favorable absorption, distribution, metabolism, and elimination (ADME) properties, and the lowest toxicity in different environments, all of which fall within the application range of QSAR models. Multiscale de novo drug design methods can efficiently handle large amounts of biochemical/clinical data and extract the chemical characteristics needed to improve the properties of a drug molecule. It is considered a more effective and safer way to discover new therapeutic agents.
In the process of drug discovery, machine intelligence methods have been used extensively in the above-mentioned computational methods over the past few decades [73]. In the booming era of "big" data, machine learning methods have developed into deep learning approaches, which offer drug designers a more efficient way to extract important biological properties from large compound databases. Here, we introduce applications of machine learning methods in QSAR analysis as well as recent advances in deep learning methods.
Decision trees (DTs) are a simple, interpretable and predictive machine learning method. Ordinarily, building a decision tree involves two fundamental steps: feature selection and pruning. Selected features become internal nodes, branches represent the outcomes of tests on the molecule, and leaf nodes carry the classification labels. To avoid overly complex trees, a pruning procedure is used to prune the established tree. The DT is a typical classification algorithm and is widely used in disease prediction and auxiliary diagnosis, such as management decision-making, building classification models for metabolic disorders, and data mining of diabetes [74]. Abdul et al. [75] developed a task-based chemical toxicity prediction framework and used a decision tree to obtain an optimal number of features from a collection of thousands, effectively helping chemists prescreen toxic compounds.
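A toy version of such a tree-based toxicity prescreen, with invented molecules and labels and scikit-learn standing in for whatever implementation the cited work used, might look as follows.

```python
# Toy decision-tree toxicity classifier: Morgan-fingerprint bits act as
# the internal-node features, leaf nodes carry the toxic/non-toxic label.
# Molecules and labels are invented; a real prescreen would train on
# thousands of annotated compounds.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.tree import DecisionTreeClassifier

smiles = ["CCO", "CCCCCCCC", "c1ccccc1", "CC(=O)O", "CCN", "ClCCl"]
toxic = np.array([0, 0, 1, 0, 0, 1])  # fabricated labels for illustration

def fingerprint(smi, n_bits=512):
    bv = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(bv, arr)
    return arr

X = np.array([fingerprint(s) for s in smiles])
# max_depth acts as simple pre-pruning to keep the tree interpretable
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, toxic)
print("predicted class for CCCl:", clf.predict([fingerprint("CCCl")])[0])
```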
The artificial neural network (ANN) achieves problem-solving by mimicking brain function. Just as the brain applies information gained from past experience to new problems, a neural network constructs a system of "neurons" that reaches new decisions, classifications, and predictions based on previous examples. Each processing element is analogous to a neuron, and massive numbers of processing elements are organized into layers of three types: input, hidden, and output. ANNs benefit from high self-organization, robustness, and fault tolerance, and have been widely applied in prognosis evaluation and early prevention of disease. Lorenzo et al. [76] used an interpretable ANN to predict biophysical properties of therapeutic monoclonal antibodies, including melting temperature, aggregation onset temperature, and interaction parameters as a function of pH and salt concentration, from amino acid composition alone. Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago, and we are currently witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing ideas from the deep learning field; even so, compared with some other fields in the life sciences, their application in drug discovery is still limited.
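As an illustration of the input-hidden-output layering described above, here is a minimal feed-forward ANN sketch; the composition features and target property are synthetic placeholders, not data from the cited antibody study.

```python
# Minimal sketch of a feed-forward ANN (input -> two hidden layers -> output)
# regressing a biophysical property from composition features (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((300, 20))                                  # e.g. composition fractions
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=300)   # synthetic property

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
ann.fit(X, y)
print("training R^2:", ann.score(X, y))
```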
The support vector machine (SVM) is one of the most promising machine learning methods; it can use molecular descriptors to construct predictive QSAR models and handle high-dimensional datasets. In comparative studies, ANN and multiple linear regression analysis have been used to construct linear and nonlinear models whose results were then compared with those obtained by SVM. For linearly separable problems, the SVM finds the hyperplane that maximizes the margin between the different classes of points [77]. For nonlinear problems, SVMs use kernel mapping to transform the data into a higher-dimensional space in which linear classification becomes possible. At present, the SVM approach is widely used in modeling at different scales for drug discovery [78].
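The margin and kernel ideas can be seen in a few lines; the ring-shaped toy data below are an assumption chosen only to show why kernel mapping helps where a linear separator fails.

```python
# Minimal sketch contrasting a linear SVM (maximum-margin hyperplane) with a
# kernel SVM that implicitly maps points into a higher-dimensional space.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # classes not linearly separable

print("linear kernel:", SVC(kernel="linear").fit(X, y).score(X, y))
print("RBF kernel:   ", SVC(kernel="rbf").fit(X, y).score(X, y))
```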
The k nearest neighbor (kNN) algorithm is one of the simplest and most intuitive machine learning methods and is usually used jointly with other feature-selection algorithms. It performs classification and regression based on instance learning: a molecule is assigned to the most common class among its k closest neighbors in descriptor space, where k is the number of neighbors considered. In ligand-based virtual screening, kNN can be viewed as an extension of chemical similarity searching to supervised learning, with the top-ranked search results predicting the best bioactivities. Weber et al. [79] applied two classification algorithms (kNN and RF) to genotype-phenotype datasets of HIV protease and reverse transcriptase (RT); both algorithms achieved high accuracy in predicting drug resistance to protease and RT inhibitors.
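A minimal sketch of the majority-vote rule follows; the descriptors, labels, and choice of k = 5 are illustrative assumptions.

```python
# Minimal sketch: kNN assigns a query molecule the majority class among its
# k closest neighbors in descriptor space (synthetic data for illustration).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))            # descriptor/fingerprint vectors
y = (X.sum(axis=1) > 0).astype(int)      # e.g. active vs. inactive

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
query = rng.normal(size=(1, 4))          # a new, unseen molecule
print("predicted class:", knn.predict(query)[0])
```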
The random forest (RF) is an ensemble learning approach that builds multiple DTs from the training examples. Like kNN, it is used for both classification and regression [80]. Compared with a single DT, an RF is far less likely to over-fit the data, and RFs have been used for bioactivity data classification [81], toxicity modeling [82], and drug target prediction [83], among other tasks. Wang et al. [84] used the RF approach to model protein-ligand binding affinity on 170 HIV-1 protease complexes, 110 trypsin complexes, and 126 carbonic anhydrase complexes, demonstrating that building an individual representation and model for each protein family is a more reasonable way to predict the affinity of that particular family.
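A minimal sketch of the ensemble idea follows; the features stand in for protein-ligand interaction descriptors and the target for binding affinity, both synthetic assumptions rather than the cited datasets.

```python
# Minimal sketch: a random forest averages many decision trees, making it far
# less prone to over-fitting than a single tree (synthetic regression data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))                                  # interaction features
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=500)     # synthetic affinity

rf = RandomForestRegressor(n_estimators=200, random_state=4).fit(X, y)
print("training R^2:", rf.score(X, y))
```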
Currently, multiscale models can predict the toxicity, activity, and ADME properties of compounds against different protein and microbial targets by integrating genomic and proteomic data. Cheminformatics has played an important role in rationalizing drug discovery, and the QSAR model has become its main auxiliary tool, enabling virtual screening for various pharmacological characteristics. Although QSAR models are widely used in the search for and design of new drugs, classical QSAR models can only predict the activity and toxicity of one biomolecule against one particular target. Multi-target QSAR (mt-QSAR), by contrast, supports rational drug design against multiple targets, providing a better way to understand molecules with varied pharmacological characteristics, including antibacterial activity and toxicity. Furthermore, unified multitasking models based on quantitative structure-biological effect relationships (mtk-QSBER) have been used in many studies. These models are built with ANNs and topological indices, can correctly predict biological activity and toxicity, and can classify compounds under different experimental conditions. They use perturbation models to capture structure-activity relationships between the site of infection and the drug, such as the PTML model [85] and models built on ChEMBL data [86], and have been applied widely in infectious diseases [71], immunology [85], and cancer [87]. The mtk-QSBER model can now efficiently carry out in silico design and virtual screening of antibacterial drugs with good biosafety. These methods provide a powerful tool for the in silico screening of reasonable drug candidates.
Deep learning is a concept closely related to the ANN and is based on hierarchical representation learning, that is, multiple layers of learning that range from low-level to high-level features. Even when molecular descriptors are not pre-selected, a deep learning method can automatically learn representations from raw, high-dimensional data [88], which makes deep learning applicable to model building in drug discovery [89]. Convolutional neural networks (CNNs) are the most commonly used architecture; they have made great progress in the computer vision community [90] and have been applied in drug design fields including de novo drug molecule identification, protein engineering, and gene expression analysis. With the rapid development of deep learning concepts such as CNNs, the molecular modeler's toolbox has been equipped with potentially game-changing methods. Judging from the success of recent pioneering studies, modern deep learning architectures are likely to be useful in the coming age of big data analysis in pharmaceutical research, toxicity prediction, genome mining, and chemogenomic applications, to name only some of the immediate areas. Kiminori et al. [91] developed a technology that predicts the resistance of free cancer cells to fluorinated pyrimidine anticancer drugs by deep learning on morphological image data. Cai et al. [92] developed a deep learning approach, termed deephERG, for predicting blockers of the human ether-a-go-go-related gene (hERG) channel among small molecules in drug discovery and post-marketing surveillance; deephERG models built with a multitask deep neural network (DNN) algorithm outperformed those built with single-task DNN, SVM, and RF.

Drug development pipelines now routinely include artificial intelligence-based (AI-based) techniques, although most AI applications concentrate on narrow tasks: current AI can address patients' specific problems, but it cannot make subjective inferences, as doctors do, from a patient's overall physical context. As a subfield of AI, ML can be trained successfully given high-quality examples, but this process is time-consuming and costly, and applying existing algorithms to massive digital datasets imposes high computer hardware requirements that also increase clinical costs. DL, in turn a subset of ML, can process big data and detect patterns through layers of neurons, although it is difficult to understand how each decision is reached by the algorithm. ML methods have achieved great success in chemoinformatics for designing and discovering new drugs. An important innovation is the combination of ML methods with big data analysis to predict broader biological features; it is vital to discover safer and more efficacious drugs by integrating structural, genetic, and pharmacological data from the molecular to the organism scale [93]. In addition, DL approaches have proven to be a promising way of learning efficiently from a large variety of datasets for modern drug discovery.
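To illustrate the multitask idea behind models such as deephERG, here is a minimal sketch in which one shared network predicts two related endpoints at once; the features, the two synthetic labels, and the layer sizes are all illustrative assumptions, not the published architecture.

```python
# Minimal sketch of a multitask neural network: shared hidden layers serve
# several prediction tasks simultaneously (two synthetic binary endpoints).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 30))                            # raw/learned features
Y = np.column_stack([X[:, 0] > 0,                         # task 1 label
                     X[:, 1] + X[:, 2] > 0]).astype(int)  # task 2 label

# Passing a 2-column label matrix makes this a multi-label (multitask) model.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1500, random_state=5)
net.fit(X, Y)
print("exact-match accuracy across both tasks:", net.score(X, Y))
```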
Multiscale modeling of drugs in excitable systems is critical because experiments at a single scale cannot reveal the underlying effects of multiple drug interactions. A computational approach that predicts the emergent effects of drugs on excitable rhythms could form an interactive, technology-driven process linking the drug and disease screening industry, academic research and development, and patient-oriented clinical care. The implications are potentially far-reaching, because the millions of people affected by arrhythmia each year would benefit from improved risk stratification of drug-based interventions.
Much progress has been made in developing multiscale computational modeling and simulation approaches for predicting the effects of cardiac ion channel-blocking drugs. Structural modeling of ion channel interactions with drugs is a critical approach for current and future drug discovery efforts, and modeling of drug receptor sites within an ion channel structure can identify key drug-channel interaction sites. Drug interactions with cardiac ion channels have been modeled at the atomic scale in simulated docking and MD simulations, as well as at the level of channel function to simulate drug effects on channel behavior [32, 94-100]. Structural modeling of drug-channel interactions at the atomic scale may ultimately allow the design of novel high-affinity, subtype-selective drugs for specific targeting of receptors in cardiac and neurological disorders.
The World Health Organization (WHO) states that cancer remains one of the most dangerous diseases today. Because cancer is a multifactorial disease, there is increasing interest in multi-target compounds that can modulate multiple intracellular pathways. However, analyzing anticancer compounds across large datasets is difficult, given the volume and complexity of the data; the ChEMBL database [101], for example, compiles big datasets of very heterogeneous preclinical assays. Bediaga et al. [87] reported a PTML-LDA model of the ChEMBL dataset for the preclinical evaluation of anticancer compounds; PTML combines perturbation theory (PT) ideas with ML methods. They compared this model with other PTML models reported by Speck-Planche et al. [72, 102-104] and concluded that theirs is the only one able to predict activity against multiple cancers. Speck-Planche et al. also derived a multi-task (mtk) chemoinformatic model, combining Broto-Moreau autocorrelations with an ANN, from a dataset of 1933 peptide cases. The model was used to virtually design and screen peptides with potential anti-cancer activity against different cancer cell lines and low cytotoxicity toward a variety of healthy mammalian cells, and it achieved accuracies above 92% in both the training and prediction (test) sets.
In addition, the inherent complexity of tumors makes it necessary to analyze their growth at different scales, encompassing phenomena that occur at spatial scales from the tissue down to the molecular level. The complexity of cancer development manifests at no fewer than three scales that can be distinguished and described by mathematical models: microscale, mesoscale, and macroscale. Wang et al. conducted a series of studies on using multiscale models for drug target identification and combination therapy [105-111]. Their method quantifies the relationship between intracellular epidermal growth factor receptor (EGFR) signaling kinetics, extracellular epidermal growth factor (EGF) stimulation in lung cancer, and multicellular growth. Multiscale modeling of tumors combined with systems pharmacology will contribute to the development of practical smart drugs, producing a comprehensive system-level approach to determine the dynamics and effects of existing and new drugs in preclinical trials, model organisms, and individual patients. Mathematical and computational studies will also provide a better understanding of the many factors that influence drug effects, helping to uncover better ways to interfere therapeutically with disease.
Multiscale models can also be used to identify pathophysiological processes and thereby allow disease staging. In many diseases, such as cancer, treatment varies with the stage of disease, so a model that helps determine prognosis can inform the right type of medication to administer or discover. Several models focus on the level of neuronal networks, including those of Cutsuridis and Moustafa for Alzheimer's disease [112] and Lytton for epilepsy [113]. ANNs, as a class of ML techniques, can be used for clinical analysis of big data, including data related to drug testing, which is critical for drug discovery. In addition, Anastasio [114] introduced process algebra, a computational technique widely used to analyze complex computing systems, into computational neurology. Sirci et al. [115] described how network (graph) theory can identify similarities and differences between pharmacological agents: in this type of study, each drug is a node, and the edges between drugs represent the chemical and transcription-based interactions that characterize them.
In addition, Ferreira da Costa et al. [116] reported the first PTML (PT + ML) study of a large ChEMBL dataset for the preclinical evaluation of compounds targeting dopamine pathway proteins. Molecular docking or conventional ML models can address a specific protein, but they cannot account for the large, complex datasets of preclinical assays reported in public databases. PT models, on the other hand, allow the properties of a query compound or molecular system to be predicted in experimental assays with multiple boundary conditions, based on previously known reference cases. In their work, the best PTML model achieved accuracies of 70-91% in the training and external validation series. Hansch's model is a classic method for quantitative structure-binding relationship (QSBR) analysis in pharmacology and medicinal chemistry; Abeijon et al. [117] developed a new PT-QSBR Hansch model, based on PT and QSBR methods, for a large number of drugs reported in ChEMBL targeting a protein expressed in the hippocampus of Alzheimer's disease (AD) patients. By decomposing how risks and causes combine in complex systems to produce disease, and how disease can be prevented or ameliorated through multi-stage, multi-target, multi-drug techniques, multiscale modeling is gradually coming within our grasp.
From the viruses behind AIDS, hepatitis C, and influenza to the current 2019-nCoV, we have long worked to develop antiviral drugs against them. However, the unique structure and mode of proliferation of viruses pose a natural challenge for drug development. Viruses have no cellular structure or metabolic system of their own and must replicate and proliferate in host cells; it is therefore difficult to find compounds that act only on viral targets without affecting the normal function of host cells. At present, most antiviral drugs work by inhibiting viral replication. However, much of the machinery used for viral replication, such as the ribosome, belongs to the human cell, so the corresponding antiviral drugs can bring serious side effects. Drug discovery therefore requires multiscale models to screen for drugs that inhibit viral replication while minimizing harm to the human body.
So far, retroviral infections such as HIV remain incurable. ChEMBL compiles complex datasets that describe numerous features relevant to predicting new drugs for retroviral infection, and this complexity makes the information difficult to analyze; without a proper model, these features cannot be fully exploited. Hence, Vásquez-Domínguez et al. [118] proposed a PTML model of the ChEMBL dataset that can be used efficiently for the preclinical experimental analysis of antiretroviral compounds. Its PT operator is based on a multi-conditional moving average that combines different features and simplifies management of all the data, and their model was the first to consider multiple features of preclinical antiretroviral assays simultaneously. To explore antibacterial activity against Gram-negative pathogens together with in vitro safety related to absorption, distribution, metabolism, elimination, and toxicity (ADMET), Speck-Planche et al. [119] proposed the first mtk-QSBER model, whose accuracy exceeded 97% in both the training and prediction (test) sets. They also developed a chemoinformatic model for the simultaneous prediction of anti-cocci activities and in vitro safety [71], with accuracies around 93% in both sets, and a model focused on anti-hepatitis C virus (HCV) activity with accuracies above 95% in both sets [120]. Cytotoxicity is one of the main concerns in the early development of peptide-based drugs, and Kleandrova et al. [121] introduced the first multitasking (mtk) computational model to predict both the antibacterial activity and the cytotoxicity of peptides. Gonzalez-Diaz et al. [122] developed a model called LNN-ALMA to generate complex networks of AIDS prevalence with respect to the preclinical activity of anti-HIV drugs.
Multiscale models are also imperfect and have their limitations. Models are expressions and simplifications of real life. No model can represent everything that can happen in the system. All models contain specific assumptions, and models vary widely in their comprehensiveness, quality, and utility. In other words, each model can only solve limited problems. Hence, we need to integrate different computational models and data in order to make full use of these models.
Computational methods have come to play significant roles in drug screening and design. Multiscale biomolecular simulations can identify drug binding sites on target macromolecules and elucidate drug action mechanisms. Virtual screening can efficiently search massive chemical databases for lead compounds, and de novo drug design provides an alternative, powerful way to build drug molecules from scratch using building blocks abstracted from previous successful drug discovery efforts. ML is revolutionizing most computational methods in drug screening and design and may greatly improve their efficiency and precision in the big data era. As we have emphasized, different models and efficient algorithms (e.g., dimensionality reduction) must be integrated properly to achieve a comprehensive study of biological processes at multiple scales as well as accurate and effective drug screening and design. Such integrated computational methods will accelerate drug development and help identify effective therapies with novel action mechanisms applicable to a variety of complex biological systems.
The authors declare no conflict of interest.
|
Domestic wastewater, whether raw or treated, is a complex, patchy matrix of organic material containing a diverse microbial community of bacteria, archaea, viruses, fungi, and parasites [1]. These microbes, which derive from pathogen shedding during active infection, host excretion of native gut flora, and passive transport of dietary microbes, vary in composition and concentration with geographic location, pathogen epidemiology, and animal host diversity [1-4]. Metagenomic surveys of viral communities exemplify the high diversity and varied origins of microbes in wastewater: sequences from over 50 different viral families have been recovered, along with numerous sequences too divergent to classify [2, 5]. DNA metagenomes from wastewater are dominated by unknown sequences and bacteriophage sequences that likely represent viruses infecting gut bacteria, while RNA metagenomes contain eukaryotic viruses infecting plants (family Virgaviridae; likely of dietary origin), followed by viruses infecting humans, insects, and rats. Although wastewater is dominated by microbes that are not pathogenic to humans, many types of viruses, bacteria, protozoan parasites, and helminth worms transmitted via the fecal-oral route are waterborne and cause human infections of varying severity and symptoms (e.g. respiratory illness, encephalitis, gastroenteritis, hepatitis, dermatitis) (Fig. 1A; [1, 3, 4, 6]).
Presently, norovirus and rotavirus (viruses), Salmonella and Campylobacter (bacteria), Ascaris (helminth), and Cryptosporidium (unicellular parasite) are among the most prevalent and widespread known human/zoonotic enteric pathogens [1, 3, 4, 6]. However, the emergence of additional waterborne pathogens (e.g. microsporidia, mycobacteria, parvoviruses, coronaviruses, picornaviruses) is of growing concern [7, 8]. The frequent lack of an identifiable etiological agent of gastrointestinal (GI) illness (reviewed in [9]) and the prevalence of novel viruses dominating sewage [10, 11] further highlight the threat that additional pathogens in wastewater pose to public health and economic productivity. Given the high incidence and concentration of pathogens in feces and wastewater, it is not surprising that the World Health Organization (WHO) recognizes acute gastroenteritis as one of the leading causes of human morbidity and mortality worldwide [12]. Furthermore, enteric viruses, particularly norovirus and rotavirus, are estimated to be largely responsible for these infections and deaths owing to their resistance to typical disinfection methods, low infectious dose, and long persistence in the environment and on fomites [3, 4, 7, 13]. Since as few as 18 viral particles can cause infection, an individual infected with norovirus can excrete up to five billion infectious doses (shedding on the order of 10^5-10^11 viral copies/g feces) into wastewater [13]. Given the high impact of enteric viruses on public health, this commentary focuses on the relation of enteric viruses to current and emerging paradigms in water quality monitoring, current methods for enteric virus detection, and the growing need for methodological improvements (Fig. 1).
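As a back-of-the-envelope check of the figures above (illustrative arithmetic only, not taken from the cited reference):

```python
# At the upper shedding bound of 1e11 viral copies per gram of feces and an
# infectious dose of ~18 particles, one gram holds roughly five billion doses.
doses_per_gram = 1e11 / 18
print(f"{doses_per_gram:.1e} infectious doses/g")   # ~5.6e9
```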
The main sources of fecal pollution of surface waters are feces and shedding by wild and domestic animals, domestic wastewater, and direct shedding during human recreation (Fig. 1A). According to the WHO, improvements in water management could alleviate 10% of the worldwide disease burden, yielding net savings of approximately US$ 72.7 billion each year until the Millennium Development Goals for improved sanitation and drinking water are met [10]. Although developing countries suffer disproportionately from enteric pathogen-associated mortality and morbidity, illness due to enteric infections still has significant health and socioeconomic impacts in developed countries with advanced wastewater and drinking water treatment [12, 14]. A primary, longstanding goal for protecting human health worldwide is therefore the accurate identification of wastewater pollution and prediction of the associated public risks in drinking, shellfish harvesting, and recreational waters. Since the early 1900s, non-pathogenic fecal indicator bacteria (FIB; e.g. Enterococcus, fecal coliforms, and Escherichia coli) have been used throughout the world as indicators of the presence of enteric pathogens and for subsequent prediction of human health risk from microbial pathogens (Fig. 1B; [14-17]). The traditional monitoring approach for recreational water management depends on routine culture-based monitoring of FIB, with beach closures when FIB concentrations exceed allowable limits. FIB concentrations are also used to determine the microbial quality of both treated and untreated wastewater and to calculate the health risks associated with reuse in agriculture [1]. For example, the 2006 WHO guidelines suggest using quantitative microbial risk assessments (QMRA), with an assumed ratio of 0.1-1 human norovirus or rotavirus per 10^5 E. coli, to calculate the human health risk of viral infection from wastewater reuse in agriculture (a worked example of this ratio appears below). Although this approach is known to be flawed (see below), FIB are still widely used as indicators of enteric pathogens and human health risks because of their consistent presence in wastewater and the readily available, low-cost, culture-based detection methods that require minimal laboratory training [14-17].
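The worked example below applies the WHO-assumed ratio to a hypothetical FIB measurement; the E. coli concentration is an assumed input chosen only to show the calculation.

```python
# Illustrative QMRA arithmetic using the assumed WHO ratio of
# 0.1-1 norovirus/rotavirus per 1e5 E. coli (hypothetical FIB count).
e_coli_per_100ml = 1e6                        # measured FIB concentration
ratio_low, ratio_high = 0.1 / 1e5, 1.0 / 1e5  # assumed virus-to-FIB ratios

virus_low = e_coli_per_100ml * ratio_low
virus_high = e_coli_per_100ml * ratio_high
print(f"assumed virus load: {virus_low:.0f}-{virus_high:.0f} per 100 mL")
```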
Despite their convenience and widespread use, it has been recognized since 1979 that FIB do not consistently correlate with the presence of enteric pathogens, particularly non-bacterial ones, or with human health risks in any water matrix (Table 1; [14-20]). Even with the introduction of quantitative PCR (qPCR) to measure FIB, the lack of a consistent correlation between FIB and human health risks holds true in the majority of studies. One exception is a recent study that demonstrated a correlation between qPCR-derived Enterococcus concentrations and the incidence of GI illness from recreational activities at a beach exposed to point-source wastewater pollution [18]. The general lack of correlation between FIB and enteric viruses is not surprising given differences in the stability and persistence of these microbes. In comparison to enteric viruses, FIB are more susceptible to wastewater and drinking water treatment, are excreted by all warm-blooded animals, and have higher die-off rates in the environment, even though they can replicate and persist in sediment after a contamination event (Fig. 1A). Drinking water, shellfish harvesting areas, recreational water, and wastewater designated for reuse that are considered safe based on FIB concentrations can actually contain high concentrations of human enteric viruses [10, 15-17, 20, 21]. Furthermore, FIB-based QMRAs for wastewater reuse have been shown to greatly underestimate the risk of norovirus illness; in one study, actual norovirus concentrations were 1000-fold greater than those predicted by the FIB-norovirus ratio [21]. The lack of an indicator encompassing all sources of fecal pollution, the heterogeneous distribution of FIB, and their highly variable ratio to non-bacterial pathogens in wastewater and environmental waters have significantly hindered improvements in microbial safety related to drinking water, wastewater reuse, shellfish consumption, and recreational waters [14, 15, 22].
As a result of the inadequacy of FIB monitoring to determine human health risks associated with enteric pathogens, particularly viruses, alternative approaches to traditional microbial quality monitoring have been recommended (Fig. 1C) . These holistic approaches incorporate sanitary surveys that inform water quality studies, which directly quantify reference pathogens (e.g. norovirus, rotavirus) and drive exploratory QMRA and subsequent management decisions [6, 15, 19, 21, 23, 24] . Three main lines of evidence suggest that this approach will improve water quality monitoring efforts. First, the power of incorporating actual enteric virus measurements into exploratory QMRAs with multiple scenarios for driving site-specific microbial safety guidelines for recreational waters was recently demonstrated [15] . Similarly, the need to quantify enteric virus concentrations and resulting health risks was also highlighted to improve QMRAs for wastewater reuse and consequent public health guidelines [21] . Finally, the utility of incorporating specific enteric viruses and/or a viral indicator to identify wastewater pollution/poor microbial quality and to better predict human health risks related to wastewater exposure has been demonstrated throughout the world [14-16, 22, 24, 25] . Furthermore, improvements in microbial safety depend on improved treatment processes, enteric pathogen modeling, and QMRAs that directly measure reference enteric viruses and/or an improved viral indicator instead of relying upon FIB-to-model pathogen ratios [3, 15, 19-21, 23, 24] .
Due to the high variability in environmental persistence and epidemiology among individual enteric viruses, a single viral indicator is unlikely to represent all of them [7, 14, 22, 25]. Despite this challenge, viruses from the families Adenoviridae, Caliciviridae, Picornaviridae, and Reoviridae, as well as the genera Anellovirus, Picobirnavirus, and Polyomavirus, have been incorporated into water quality studies throughout the world [14, 16, 26]. Additionally, several bacteriophages (viruses that infect bacteria), particularly F-specific RNA coliphages, have been proposed as indicators of enteric viruses (reviewed in [22]); however, none reliably correlate with the presence of enteric viruses in the environment or through disinfection processes [16, 17, 20, 25]. The fairly consistent, relatively high concentrations of adenoviruses and polyomaviruses in wastewater make them possible indicator viruses (Table 1); however, their low concentrations in contaminated environments still present methodological difficulties for detection [3, 14, 16]. Metagenomic studies have shown the dominance of plant viruses (family Virgaviridae) in human feces and wastewater [2, 5, 11, 27]. Consequently, a pepper plant pathogen, pepper mild mottle virus, has also been proposed as an alternative enteric virus indicator because of its high concentration in wastewater (average 10^6 particles/mL) and dietary (i.e., human infection-independent) origin in feces [14, 25, 26, 28]. Further research is needed to understand its correlation with infectious enteric viruses across geographic regions, disinfection processes, and contamination scenarios.
Conventional methods for the detection and/or quantification of enteric viruses in environmental matrices are culture- or molecular-based and involve two key steps: virus concentration and target detection [3, 9, 26]. A variety of virus concentration techniques are available, optimized for different environmental matrices, which exploit the physicochemical properties of viruses (e.g. adsorption/elution) and/or particle size separation (e.g. filtration). The efficiency of virus isolation and concentration ranges widely, with reported values from 5 to 92% [26]. While culture-based methods can determine the concentration of infectious enteric viruses, they are expensive, laboratory intensive, yield delayed results, and fail to measure many waterborne viruses of concern (e.g. human norovirus, which has not yet been obtained in cell culture) [3, 26]. For these reasons, molecular methods, particularly quantitative ones, have become increasingly popular (Fig. 1C). The most common methods are amplification-based and include (reverse transcription) (RT-)qPCR and nucleic acid sequence-based amplification (NASBA) [29].
Since exposure to just a few virus particles (e.g. norovirus) can cause illness, virus detection methods must be sensitive enough to detect low environmental concentrations [3, 15, 16, 24, 29]. Given that an appropriate enteric virus indicator has not yet been identified, investigators rely largely on pathogen-specific molecular assays [14, 15, 24]. However, poor method sensitivity is one of the major limitations of using a human pathogen as an indicator for enteric viruses; improvements in virus concentration methods or use of an indicator found at higher concentrations (e.g. pepper mild mottle virus) could alleviate this problem. Other limitations of qPCR and NASBA include laboratory time, the expense of equipment and reagents, co-concentration of inhibitors (e.g. humics, organics), target virus selection, primer specificity, and the need for proper standards and controls. Standards exist for the correct interpretation of qPCR results, but these guidelines are not widely followed in water quality studies [30]. Despite attempts to remove non-infectious viruses prior to amplification-based molecular methods, no effective molecular method exists to differentiate infectious from non-infectious enteric viruses [3, 14, 29].
To date, the field of environmental microbiology has focused on increasingly high-tech molecular methods while overlooking the need for affordable, rapid, and practical approaches to detect enteric viruses [15, 26]. Despite the increasing availability and affordability of molecular methods for detecting enteric viruses in research settings [3], these methods remain far out of reach for routine monitoring in high- and low-income countries alike [29]. This is particularly true for developing countries that have poor sanitation coverage, lack safe drinking water, and rely on wastewater reuse in agriculture [1, 12]. Despite the inadequacy of FIB for predicting human health outcomes, lab-free, user-friendly, relatively inexpensive FIB culture-based methods exist for different water matrices (IDEXX Laboratories, USA; AguaGenX, LLC, USA). Over the last decade, advances in nanotechnology have enabled immunostrip tests (dipstick tests similar in format to a pregnancy test) for the detection of a variety of targets, including human viruses (reviewed in [31]). Other immuno-based rapid detection tests exist for norovirus (evaluated by [32]), and even simpler, more affordable tests for detecting Salmonella Typhi via a flow-through membrane immunoassay platform have been developed [33]. Applying these technologies to various matrices, likely with viral concentration prior to detection, is essential for advancing microbial safety worldwide.
Waterborne viruses associated with wastewater are and will continue to be a major source of morbidity and mortality worldwide. Current FIB monitoring and FIB-based QMRAs frequently underestimate viral presence and the associated health risks. To advance disinfection processes and improve microbial safety for recreational waters, shellfish harvesting areas, and wastewater reuse, measures of enteric viruses must be incorporated to guide public health policies aimed at minimizing risk. As water management practices move away from routine FIB monitoring in favor of a holistic, risk-based approach, methods for the concentration and detection of viral pathogens must improve. In conjunction with these water management advances, the development of simple, affordable, lab-free tests for the rapid detection of enteric viruses and/or viral indicators is essential for worldwide improvements in microbial safety. Such tests will be particularly valuable for identifying unpredictable wastewater contamination events (e.g. infrastructure failure, natural disasters) or in situations where reference pathogen measurements are not economically feasible. Additionally, the availability of user-friendly, point-of-use tests will enable individuals to ensure their own microbial safety.
|
In December 2019, several cases of severe pneumonia of unknown etiology emerged in Wuhan, China. Within a short period after the first case was reported, the outbreak gradually spread across the country and the globe. The causative agent was a betacoronavirus, SARS-CoV-2, which elicits a severe acute respiratory syndrome (SARS) called covid-19 1 .
The infectious disease spread rapidly, reaching virtually every country in the world. By the end of the first week of May 2020, there were over 3.8 million confirmed cases worldwide and around 260,000 deaths 2 . By May 6th, Brazil had reported over 125,000 confirmed cases, 8,536 deaths, and a case fatality rate around 7% 3 . In Rio de Janeiro, the first case was reported on March 1st, 2020; by May 6th, the state had 13,295 confirmed cases, 1,205 deaths, and a 9.1% fatality rate 3 .
The infection often causes mild symptoms, including cough, muscle pain, and anosmia, and it can progress to high fever, pneumonia, respiratory distress 4 and, in some cases, death 5-7 . Yet, in most cases, individuals have few or no symptoms, making them a substantial source of transmission and posing a challenge to preventing disease dissemination 8 .
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is considered the gold-standard technique for detecting and confirming covid-19 9 . However, some studies show a high rate of false-negative tests due to factors that can influence the results, such as the type of biological sample, inadequate collection, fluctuation of viral load, and the period between sample collection and symptom onset 10 . Serological tests, by contrast, allow us to investigate the presence of acute-phase (IgM) or memory (IgG) antibodies. To facilitate the control of viral transmission and ensure timely public health intervention, it is essential to adopt a simple, sensitive, and specific test that provides immediate and accurate results for promptly identifying SARS-CoV-2-infected patients 11 .
Knowing the prevalence of SARS-CoV-2 among asymptomatic people is relevant for two major reasons. First, healthy individuals in epidemic areas may be infected yet asymptomatic and still represent a significant source of transmission; at the beginning of the epidemic in China, about 86% of infections went undetected, yet they were the source of infection for about 79% of the cases 8 . Second, herd immunity indicates how far an infection has spread within a community, and monitoring its level provides a reference for guiding future decisions on the right time to start relaxing social distancing measures, minimizing possible subsequent epidemic outbreaks 12 .
The seroprevalence of SARS-CoV-2 in asymptomatic groups has been addressed by few studies, among which a major one is the report from the Diamond Princess cruise ship. After an outbreak during the cruise, Japanese health authorities tested 3,063 passengers by RT-PCR, and the estimated asymptomatic proportion among all infected cases was 17.9% 13 . A study conducted in Santa Clara County, California, USA, found a 2.8% seroprevalence of SARS-CoV-2 after adjusting for test sensitivity and specificity and population demographics 14 .
Evaluating trends in the prevalence of viral infections in blood donors is essential not only for estimating the effectiveness of blood safety strategies but also for enhancing them, reducing the potential risk of infection by blood transfusion 15 . Determining the prevalence of SARS-CoV-2 in blood donors enables monitoring of virus circulation among healthy people and helps in implementing strategies to reduce transmission, especially in the absence of seroprevalence surveys. Yet there are but few studies on prevalence in blood donors; two of them, still unpublished, reported seroprevalences of 1.7% in blood donors in Denmark and 2.7% in the Netherlands 16, 17 .
During the final two weeks of April 2020, we conducted a seroprevalence survey among volunteer blood donors of Hemorio, the main blood center in Rio de Janeiro, Brazil. This manuscript reports the prevalence of antibodies to SARS-CoV-2 within a sample of 2,857 volunteer blood donors, adjusting for gender and age group to supply such information to health authorities for estimates, extrapolations, and health interventions. To date, this is the first study in Latin America addressing the seroprevalence of SARS-CoV-2 in asymptomatic blood donors.
This was a cross-sectional study consisting of serological testing in volunteer blood donors. For the analysis, we considered sociodemographic data (age, gender, donation site: fixed or mobile), education level, and place of residence (the capital or other municipalities of Rio de Janeiro).
The donor management software (SACS) of the blood center provided individuals' demographic data using a code, without identifying them. The study group comprised all people who donated blood at the Hemorio Blood Center from April 14th to April 27th.
In Brazil, before blood donation, candidates must complete a written questionnaire and undergo a brief health screening. To be accepted as blood donors at Hemorio, candidates had to comply with all donation eligibility criteria set by the Brazilian Ministry of Health and the American Association of Blood Banks 18 . Recently, criteria regarding covid-19 have been added: prospective donors could not have had flu-like symptoms within the 30 days before donation, had close contact with suspected or confirmed covid-19 cases in the 30 days before donation, or traveled abroad in the past 30 days. Candidates presenting fever (forehead temperature > 37.8°C) on the donation date were also rejected. Thus, individuals in the study group had no symptoms of covid-19 and no known epidemiological history of the disease.
Once accepted to donate blood, candidates were automatically included in the study, provided they agreed to sign the informed consent form for blood donation and for testing for other pathogens, that is, pathogens beyond the infectious disease markers required to be tested in all blood donations in Brazil. Both blood donation and sample collection were performed at a fixed donation site (Hemorio's facilities) or at mobile sites in churches and private condominiums in Rio de Janeiro.
This study was approved by the Research Ethics Committee of Hemorio (Approval No. 4.008.095).
All individuals classified as eligible for donation during the study period participated in the survey. We excluded those who refused to sign the informed consent form for blood donation and testing.
The serum used for testing infectious disease markers was also used for the SARS-CoV-2 antibody test. At the beginning of each blood donation, we collected and barcoded these samples for each donor.
To detect IgG and IgM anti-SARS-CoV-2 antibodies, we performed the rapid test MedTest Coronavirus 2019-nCoV IgG/IgM, manufactured by MedLevensohn (Yuhang District, China): an immunochromatographic assay that combines SARS-CoV-2 antigen-coated particles to qualitatively detect IgG and IgM antibodies. The MedTest Coronavirus (covid-19) IgG/IgM test, licensed by the Brazilian Health Surveillance Agency (ANVISA) in March 2020 (https://consultas.anvisa.gov.br/#/saude/q/?numeroRegistro=80560310056), can detect SARS-CoV-2 antibodies in whole blood, capillary blood, serum, and plasma. We performed the tests with serum, following the manufacturer's instructions.
We tested serum or plasma from antibody-positive samples (IgM, IgG, or IgG + IgM) for SARS-CoV-2 by qRT-PCR (Integrated DNA Technologies (IDT) SARS-CoV-2 N1/N2/P assay; Promega, Madison, USA).
For RNA extraction, we used the Qiagen MDX instrument (Hilden, Germany) and the Applied Biosystems MDX thermocycler instrument from Thermo Fisher (Waltham, USA), following the manufacturers' instructions.
We tabulated the data in an Excel® spreadsheet with donors' demographic characteristics reported by code, so that their individual identities remained anonymous.
The prevalence of covid-19 in the population was measured in three steps. First, we reported the crude rates of positive tests without adjustment. Second, we estimated the weighted prevalence using the Rio de Janeiro population in 2020; this adjustment balances our sample against the population distribution by gender and age. Third, we adjusted the prevalence for test sensitivity (85%) and specificity (99%), following the manufacturer's estimates. The true (adjusted) prevalence and its 95% confidence interval were computed using a previously published method 19 .
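The sensitivity/specificity correction in the third step is the standard Rogan-Gladen adjustment; the sketch below reproduces it with the values stated above (the function name is ours, not the authors'), and it recovers the 3.6% adjusted estimate reported in the Results from the 4.0% crude prevalence.

```python
# Rogan-Gladen adjustment of an apparent prevalence for imperfect test
# sensitivity and specificity (sens 85%, spec 99%, as stated in the Methods).
def adjusted_prevalence(apparent, sensitivity, specificity):
    # true prevalence = (apparent + spec - 1) / (sens + spec - 1)
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

print(f"{adjusted_prevalence(0.040, 0.85, 0.99):.3f}")   # crude 4.0% -> ~3.6%
```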
For statistical analysis, we considered two outcomes: the unadjusted and the weighted prevalence of antibodies to SARS-CoV-2. The following variables were also considered: gender, age group (18-29; 30-49; 50+), donation site (Hemorio, churches, condominiums), education level (higher education; secondary education), and place of residence (the capital or other municipalities of Rio de Janeiro). To investigate a possible increasing trend, the collection dates were grouped into three periods: April 14th to 18th; April 19th to 23rd; and April 24th to 27th.
To establish the correlates of SARS-CoV-2 infection, we used logistic regression models and odds ratios (OR). Statistical tests at the 5% significance level were adopted to relate the prevalence of antibodies to SARS-CoV-2 (IgG, IgM, or IgG+IgM) to donors' characteristics (gender, age group, education level, place of residence, and donation site and period).
Statistical analysis was performed using Stata version 12 (StataCorp, College Station, Texas, USA).
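For readers without Stata, a minimal Python/statsmodels sketch of the logistic-regression odds-ratio approach follows; the covariate, effect size, and data are synthetic assumptions, not the study data.

```python
# Minimal sketch of estimating an odds ratio by logistic regression
# (the study used Stata 12; this statsmodels version is illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
late_period = rng.integers(0, 2, size=1000)      # 0 = early, 1 = late collection
logit_p = -3.2 + 0.7 * late_period               # assumed synthetic effect
positive = (rng.random(1000) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(late_period.astype(float))
fit = sm.Logit(positive, X).fit(disp=0)
print("OR, late vs. early period:", np.exp(fit.params[1]))
```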
The study sample comprised 2,857 volunteer blood donors, all of whom were tested for IgG and IgM anti-SARS-CoV-2. The overall antibody prevalence was 4%; Tables 1, 2, and 3 show these results in detail.
Regarding the type of antibody detected, IgM-only accounted for 23.7% of positive results, IgG-only for 11.4%, and IgM+IgG for 64.9%. Figure 1 shows the prevalence rates by period (April 14-18th, April 19-23rd, and April 24-27th). Table 1 shows four prevalence estimates. The crude prevalence of SARS-CoV-2-positive tests was 4.0% (95%CI 3.3-4.7%). The prevalence weighted to the population of Rio de Janeiro was slightly lower (3.8%; 95%CI 3.1-4.5%). Further adjustment for test sensitivity and specificity yielded even lower estimates: 3.6% (95%CI 2.7-4.4%) for the non-weighted prevalence and 3.3% (95%CI 2.6-4.1%) for the weighted prevalence.
In the logistic regression analyses (Table 2), some covariates were significantly associated with the crude prevalence of antibodies to SARS-CoV-2. Collection period was the variable most significantly associated with the crude prevalence: the later the period, the higher the prevalence. In the third period (April 24-27th), the odds of a positive test for SARS-CoV-2 antibodies were twice as high as in the first period (April 14-18th) (OR = 2.05; 95%CI 1.33-3.16). Regarding sociodemographic characteristics, the younger the blood donors, the higher the prevalence; and the lower the education level, the higher the odds of testing positive for antibodies to SARS-CoV-2. We found no statistically significant difference by gender or place of residence (capital or elsewhere). Collection site was also significantly associated with the crude prevalence: blood donors from condominiums showed a significantly lower prevalence than blood donors from Hemorio.
We found similar results for the weighted prevalence of antibodies to SARS-CoV-2 (Table 3). The variables significantly associated with the crude prevalence were also significantly associated with the weighted prevalence. However, with the weighted sample, we found more accentuated statistical significance for the 18-29 age group (OR = 1.86; 95%CI 1.12-3.08), for individuals with lower education levels (OR = 2.11; 95%CI 1.35-3.28), and for condominium donors (OR = 0.45; 95%CI 0.23-0.86). Collection period was also significantly associated with the weighted prevalence (p < 0.005), although the OR was slightly higher for the crude prevalence.
We tested all the antibody-positive samples (IgG and/or IgM) by qRT-PCR and found no PCR-positive test among them.
In this survey of antibody responses to SARS-CoV-2 among Brazilian blood donors, we found a seroprevalence of 3.3% (95%CI 2.6-4.1), adjusted for test sensitivity and specificity and weighted to the population of Rio de Janeiro aged 18 to 69 years by age group and gender. This estimate is higher than those observed in two seroprevalence surveys among blood donors conducted in Denmark and the Netherlands (1.7% and 2.7%, respectively) 16, 17 . The prevalence varied substantially among subgroups: the youngest and least educated presented significantly higher values. We also found an increasing linear trend in prevalence along the study period: 2.8% during the first week, 4.5% during the second, and 5.3% during the third (p < 0.01), resulting mainly from the increase in IgG + IgM antibodies.
Two months after the first covid-19 case in Rio de Janeiro, over 13,000 confirmed cases and 1,000 deaths had been reported 3 . In the early weeks of March, the state adopted several measures to restrict social interaction and improve diagnostic capacity 20 ; yet the epidemic curve is still on the rise, and hospital services for covid-19 care face imminent collapse 21 .
The questions of whether and when such measures should be implemented or strengthened have played a leading role in debates among public health researchers and professionals, health authorities, and communities. A feasible guide for such decisions is the level of herd immunity within a population: levels around 60% have been considered the threshold for this disease, based on available estimates of the basic reproduction number of the infectious agent 22 (see the illustration below). In the absence of a vaccine against covid-19, such a level of herd immunity could only be achieved by natural infection. However, in settings such as Rio de Janeiro, where a forthcoming breakdown of the health care system is expected, fostering natural herd immunity is an unreasonable option: it would require relaxing social distancing measures, which would increase the number of deaths from covid-19. Conversely, the effectiveness and length of such measures will decrease the capacity to achieve natural herd immunity, impair the implementation of exit strategies, and increase the risk of future epidemic outbreaks 23 .
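The ~60% figure follows directly from the basic reproduction number; a brief illustration (the R0 values are assumptions spanning common estimates for SARS-CoV-2 at the time):

```python
# Herd-immunity threshold implied by R0: threshold = 1 - 1/R0.
for r0 in (2.0, 2.5, 3.0):
    print(f"R0 = {r0}: threshold = {1 - 1/r0:.0%}")   # R0 = 2.5 gives 60%
```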
Our results indicate that achieving an effective level of herd immunity would be challenging in the short term. Thus, relaxing social distancing measures might be unwise in the immediate horizon and must be carefully weighed in the future against hospital infrastructure availability, particularly ICU beds and ventilators, which provide appropriate care for severe covid-19 patients. It is unclear whether the neutralizing antibody response provides the effect required to prevent new infections 24 . If only a fraction of the individuals presenting antibodies have neutralizing antibodies, then the target herd immunity level would have to be raised. Under these circumstances, the desired level of herd immunity will most likely not be achieved before an effective vaccine becomes available.
We believe this study comprises the first large seroprevalence survey for SARS-CoV-2 infection in asymptomatic people conducted in Rio de Janeiro, Brazil. The study group is not a random sample, but it covers a demographically and socially heterogeneous healthy population, allowing a preliminary outlook on antibody prevalence in asymptomatic individuals. Our estimates were adjusted for test sensitivity and specificity and weighted by population age and gender, providing a better view of antibody prevalence at the population level.
Our results corroborate some basic premises. We found an increasing (and expected) seroprevalence over time, given that the epidemic curve has been rising for the past two months in Rio de Janeiro without any sign of decline 21 . The higher prevalence among the youngest was also predictable, as they comprise the core workforce, are more likely to move around, and are thus exposed to infection even under social distancing restrictions. Likewise, we expected a higher prevalence among the less educated, who often belong to lower socioeconomic strata and encounter greater difficulty following social distancing recommendations because they must seek sources of income; many also live in crowded households without piped water, hindering the adoption of basic hygiene measures. A study conducted in the state of Ceará found that individuals with primary education considered themselves at lower risk of getting covid-19 and were less engaged in voluntary quarantine than those with higher education levels 25 . Finally, we also anticipated that blood donors from condominiums would present lower prevalence, as the donation site is right at their living place, which suggests that they follow social distancing recommendations; conversely, those donating blood at the Hemorio blood center are more likely to do so while coming to the city center for work or other reasons.
These study results should be interpreted with caution. The study groups vary in demographic and social terms but still comprise a convenience sample; thus, extrapolating the results to the overall population of Rio de Janeiro, or even only to those aged 18 to 69 years, might be biased. We did not select the sample to provide estimates for different regions of the state, although we expect the prevalence of infection to vary across geographical areas of the city. Finally, for the adjusted prevalence estimates we adopted the sensitivity and specificity values provided by the manufacturer, which might not be valid for the Brazilian population profile. Still, the specificity value (99%) was confirmed by a pilot study with 120 plasma samples from Hemorio's blood donor repository, collected in 2018, long before the novel coronavirus pandemic; among these 120 samples, only one tested positive for SARS-CoV-2 antibodies.
Despite these limitations, we may infer that effective levels of natural herd immunity to SARS-CoV-2 are far from being reached in Rio de Janeiro under the implemented social distancing measures, and should not be treated as a target for a short-term exit plan. Deciding the adequate time for relaxing such measures in the short term should consider the availability of adequate health care infrastructure, until a larger, population-based serological survey can be conducted. Such a survey should aim at identifying variations in the levels of herd immunity within the state and may eventually support a more locally oriented strategy, considering levels of natural herd immunity, the degree of vulnerability of the population, and the availability of adequate resources for testing and treating severe cases of covid-19.
|
By the end of 2020, about 300,000 infants may be born in the United States to mothers infected by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) at some time during their pregnancy. Millions more will be born to mothers and families who have experienced tremendous stress and change in their daily lives and environments due to the pandemic. The postnatal effects of in utero exposure to SARS-CoV-2 in pregnant women with Coronavirus Disease 2019 (COVID-19) are mostly unknown, but are suggested by the 1918 Influenza pandemic, which had lifelong effects on health and achievement. We discuss how maternal infections and in utero exposure to the pandemic environment may affect postnatal development and lifetime cardiovascular, metabolic, and neurodevelopmental health. The effects of SARS-CoV-2 on pregnant women and neonates are compared with those of other coronaviruses, SARS-CoV (severe acute respiratory syndrome) and MERS-CoV (Middle East respiratory syndrome). We then suggest indicators of maternal, neonatal, and later-life health that should be monitored following in utero exposure to SARS-CoV-2. Lastly, we consider how to disentangle the effects of maternal infection from the effects of prenatal exposure to the stress of the pandemic environment, both positive and negative.
The 1918 influenza pandemic (Influenza A, H1N1 subtype) had multiple health consequences for the cohort that was in utero during the pandemic. 1, 2 The U.S. cohort in utero during the peak pandemic period experienced more depression after age 50, diabetes and ischemic heart disease after age 60, and mortality after age 65, with some variation in estimated effects. 2-5 The Taiwanese cohort exposed in utero in 1919 had more renal disease, circulatory and respiratory morbidities, and diabetes after age 60. 6 The corresponding Swedish cohort had higher morbidity at ages 54-87, indicated by excess hospitalization and male mortality from heart disease and cancer. 7 The effect on late-life health varied by trimester of development at the pandemic peak: in the 1918-1919 U.S. cohort, elevated heart disease was associated with second-trimester exposure at the peak, while diabetes was associated with third-trimester exposure at the peak.
No data on birth weight are available for the 1918 influenza cohort; however, there are many indications that early-life physical health and neurodevelopment were affected either directly by in utero exposure or indirectly by its multifarious stresses. Men in the 1918-1919 U.S. flu cohort were shorter at WWII enlistment than flanking cohorts, 2 while children and adolescents in Taiwan were shorter than surrounding cohorts. 6 U.S. women exposed in utero married men with less education. 3 Educational deficits in the U.S. and Taiwan for these cohorts could be related to early cognitive deficits. 1, 6 Lower socioeconomic status (SES) in adulthood for those in utero during the 1918 epidemic was also confirmed in the parents of the Wisconsin Longitudinal Study participants, 8 but was not found in Sweden. 7 These historical associations for the cohort in utero in 1918-1919 are based solely on birth dates and cannot identify individuals whose mothers actually experienced infections. The analysis is thus restricted to population data on the timing of infections and birth dates related to outcomes many decades after the pandemic, so we cannot readily distinguish direct effects on long-term health and well-being due to exposure to the infection itself from stress effects due to exposure to the pandemic environment. These socio-environmental stressors include potentially increased levels of uncertainty, undernutrition, under- and unemployment, poverty, and loss of loved ones. Despite this, the findings from the 1918 flu pandemic of lasting consequences for those exposed prenatally warrant study of similar programming effects from the current pandemic. Unlike the earlier pandemic, the current pandemic provides an opportunity to study individuals with known in utero exposure to SARS-CoV-2 throughout their lives, based on known maternal infection and socio-environmental stresses. We can add further critical comparisons to individuals exposed in utero to the pandemic environment but with no maternal infection, which potentially allows disentanglement of the effects of exposure to infection versus the pandemic environment.
Maternal viral infections can affect the fetus or neonate through multiple pathways. Maternal infections can directly infect the fetus as well as the placenta. 9 Viral species differ markedly in their impact on pregnant women and fetuses. For instance, HIV and hepatitis B virus are vertically transmitted in up to 40% of cases without the successful interventions that have almost eliminated direct transmission in the U.S. Fortunately, maternal transmission is rare for the coronaviruses MERS-CoV (2012) and SARS-CoV (2002) 10 and the 2009 H1N1 influenza virus, 11 and no severe birth defects have been associated with these prior coronaviruses. In general, pregnancy increases virus-associated morbidity and mortality, as shown for influenza H1N1 in 2009 and SARS-CoV. 12, 13 These infections also caused more preterm delivery and low birth weight. 13 Virus-related maternal inflammatory processes can impact the placenta and disturb in utero metabolism, causing intrauterine growth restriction (IUGR). 14 Long-term consequences include impaired glucose and insulin metabolism and hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis. 15 Historically, Barker and colleagues showed that IUGR and low birth weight, which could be related to maternal infection or poor nutrition, are associated with later-life cardiovascular disease and obesity. 16 In the Helsinki Birth Cohort (1934-1944), which suffered a suboptimal nutritional environment, smaller placentas predicted larger childhood body mass index (BMI) and increased adult cardiovascular disease. 17 Maternal immune response to viruses and bacterial coinfections can impact fetal development with multi-generational behavioral and cognitive consequences. 18
We summarize the main points of the COVID-19 literature published thus far, bearing in mind that these findings are preliminary as they are based on small numbers, specific locations, and, often, more severely ill patients.
Pregnant women do not appear to be at increased risk of contracting SARS-CoV-2, as their rates of infection seem to parallel those of their surrounding communities. When pregnant women do contract SARS-CoV-2, the majority experience a clinical course similar to that of age-matched non-pregnant women; however, reports increasingly show a greater need for hospitalization, mechanical ventilation, and intensive care. 19, 20, 21 Thus far, the overall mortality rate from COVID-19 in pregnant women is not significantly different from that in non-pregnant women. 19 As in the general population, a minority of pregnant patients develop severe and critical illness, with cardiopulmonary compromise and even multi-organ failure. SARS-CoV-2 can cause an exaggerated inflammatory response and coagulation activation in some individuals, leading to worse outcomes. Whether the immunologic changes of pregnancy affect this exaggerated inflammatory response is unknown. The limited data suggest that COVID-19 pregnancies have similar elevations of cytokines (IL-6, TNFα) and chemokines (CCL2, CXCL10) as other COVID-19 patients. 22 Some postulate that the anti-inflammatory state of pregnancy, required to prevent rejection of the fetus, may actually help fight the inflammatory cascade caused by COVID-19. 23 Severe COVID-19 disease is also associated with increased rates of thromboembolic events, despite prophylactic anticoagulation. 24 Although pregnancy increases the risk of thromboembolic events, and COVID-19 could further increase thromboembolic events in pregnancy, this possibility has not been studied.
Little is known about the impact of the SARS-CoV-2 virus on the placenta, or about the impact of the associated inflammatory and thromboembolic responses on the placenta. There are some reports of neonatal infections from SARS-CoV-2 infection of the placenta and its transplacental transmission. 25 The inflammatory and thromboembolic responses related to maternal COVID-19 disease may affect placental growth, vascular perfusion, and hypertensive disorders of pregnancy. One study of third-trimester placentas from women infected with SARS-CoV-2 showed a significant increase in features of maternal vascular malperfusion, which may be related to the inflammatory and thromboembolic response. 26 Importantly, these patients had moderate-to-severe COVID-19, but none were intubated. Together, the risk of these pathological markers was 3.3-fold that of a non-exposed reference population.
Rates of preterm birth may be increased by maternal SARS-CoV-2 infections, with estimates ranging from 12% to 27%. 27, 28 These may be overestimates because they over-represent the more severely ill pregnant women. The increased rates of preterm delivery may be related to fever and hypoxemia, which increase the risk of preterm labor, premature rupture of membranes, and abnormal fetal heart rates. However, asymptomatic but infected pregnant women also showed more preterm delivery than non-infected pregnant women. Estimates of low birth weight range from 12% to 20%. 20 A meta-analysis reported 15.6% low birth weight, which did not significantly differ by maternal COVID-19 status. 29 Importantly, studies from Ireland and Denmark, as well as anecdotal reports by doctors from several regions around the world, suggest decreasing rates of preterm birth and of very-low-birthweight infants in the general population during the pandemic. 30, 31 This suggests that, for some, the measures taken to control spread of the virus may have had positive externalities. Additionally, rates of C-section delivery are increased in pregnant women with SARS-CoV-2 infections. In a systematic review, C-sections occurred in 52%-85% of COVID-19 deliveries, which may also be an overestimate. 27 Early in the pandemic, C-section was the primary mode of delivery in some hospitals; given the lack of information about vertical transmission, there was concern that vaginal delivery might increase the risk of infection in the neonate. C-section rates may also be increased due to abnormal fetal heart rate tracings related to maternal hypoxia and fever. Additionally, C-section may be chosen to relieve severe maternal disease, resulting in preterm birth. 32 Stillbirths among women with COVID-19 have so far been rare and seem to have been related to very severe maternal illness. Infected mothers in one prospective study had significantly increased (severalfold) rates of stillbirth. 33 In another study, there was a general increase in stillbirths in the pandemic period in Britain. 34 Data on the effects of first- and second-trimester maternal infections are very limited, as are data on spontaneous abortions. There is a theoretical concern that fever in the first trimester may increase the risk of miscarriage and congenital anomalies. Further studies of the outcomes of maternal infections, especially in the first and second trimesters, are needed.
Most newborns born to mothers with COVID-19 disease have been healthy and most commonly suffered from sequelae of prematurity rather than effects of SARS-CoV-2. The rate of congenital infection with SARS-CoV-2 is currently estimated at <3%, approximating the rates of other congenital infections. 35 In utero transmission needs further study to substantiate reports of transmission based on polymerase chain reaction assays for SARS-CoV-2 in the neonate and/or placenta, or on elevated IgM antibodies. 25 The few case reports of potential in utero transmission are complicated by the initial lack of criteria defining vertical transmission. Horizontal spread of the disease to neonates from family members or other household contacts occurs and is generally mild. 36

Ongoing/future studies

We suggest that, to capture the consequences of viral exposure in utero for childhood development and adult health, COVID-19 birth cohort studies consider immediate collection of data from the mother, fetus, neonate, and placenta. These initial data should be followed by analysis of child growth and development and lifelong study of health, behavioral patterns, and cognitive functioning. Table 1 suggests multiple indicators for immediate measurement and measurement across the lifespan. Initial indications of disease presence, exposure, and severity are important, including inflammatory cytokines in the mother and infant. Clinical data on the size and development of the infant, both before and at birth, and on the placenta at birth should be routinely collected. Because relatively simple anthropometric measures routinely taken at birth serve only as a crude proxy for the prenatal environment, we encourage studies using longitudinal ultrasound measurements of fetal development. Because hypocortisolemia was common in SARS-CoV-1, HPA axis activity should be studied in mothers with severe COVID-19 (ChiCTR20000301150). 37 Maternal genetics may also influence vulnerability to infection and impact on the fetus, as shown for adult carriers of ApoE alleles. 38 Molecular studies could include the placental transcriptome for genetic variants of gene expression (eQTLs) that predict childhood obesity and BMI. 39 Epigenomic methylation of DNA and histones should be monitored across the lifespan of the infant. DNA methylation measures, represented as epigenetic clocks that estimate "aging," can be assessed in pediatric and then adult years. 40 Socioeconomic and educational status of the mother and child should also be assessed, as causes of adverse outcomes can be social as well as biological. Additionally, COVID-19 infection appears to be more common and severe in racial and ethnic minorities in the U.S. 41 Moreover, preexisting conditions increase the severity of COVID-19 infection, which may have implications for transmission and neonatal outcomes. Our recommendations are oriented toward the United States, as the situation may differ in low- and middle-income settings.
The child should be followed for indicators of growth, motor, and neurological development milestones. Throughout life, the standard blood-based indicators should be collected. Composite measures derived from clinical chemistry and physical exams can be used starting in childhood to categorize the "Pace of Aging." 42 Follow-up should include information on development of both pediatric (e.g., leukemia, asthma) and adult diseases (e.g., heart disease, cancer, stroke).
The COVID-19 pandemic also provides an opportunity to study the effects of in utero exposure to the pandemic environment and the effects of different regional and national policies to address viral spread. The COVID-19 pandemic has increased levels of stress, unemployment, food insecurity, and domestic violence, and has diminished or disrupted prenatal care. These factors may have programming effects and/or direct effects on neonatal health. Pandemic-related lockdown policies may benefit some individuals: pregnant women staying home may rest more and have decreased exposure to other infections, and women with job stability may benefit from less stress from work and commuting. These benefits are evidenced by reports of decreased rates of preterm birth in the general population during the pandemic.
Studies would include mother-child dyads and a selection of mothers with symptomatic and asymptomatic COVID-19 diagnoses, as well as a group of mothers and babies with no in utero exposure. We suggest the desirability of enrolling an additional child born to the same mother either shortly before or shortly after the COVID-19 pandemic. These study groups would allow comparison between infants who were exposed to maternal infection and those who were prenatally exposed to the pandemic without having experienced maternal infection. These infants can also be compared to those who were born before or conceived after the pandemic, providing further insight into the effects of the pandemic environment. The inclusion of information on social and economic stresses will allow comparisons between countries taking different measures to reduce spread of the virus. These types of comparisons may give us insights beyond the effects of COVID-19, such as socioeconomic and social policies that may decrease the risk of preterm birth, which has eluded the maternal-infant health community for decades. This should be balanced by further investigating reports of generally increased stillbirth and its potential contributors.
Judging by the 1918 influenza birth cohort, maternal COVID-19 infections and exposure to the pandemic environment may have long-term effects on growth and aging for the cohort in utero during this pandemic. From comparisons of the 1918 influenza pandemic with the current COVID-19 pandemic, we suggest studies of markers from birth through adulthood that indicate the altered development and accelerated aging that could be experienced in the century ahead.
We also need to evaluate whether the effects of COVID-19 can be generalized to other viruses. While the current pandemic is similar in some ways to the earlier one, there are important differences. COVID-19 resembles the 1918 flu in that many infections are relatively uncomplicated but deadly for a small proportion of cases. Surprisingly, given the 100-year difference in time, mortality during the two pandemics appears fairly similar in New York at the height of the pandemic: an increase over baseline of about 200 deaths per 100,000 population in the 1918 epidemic and of about 150 per 100,000 in 2020. 43 A key difference is the more significant role of secondary bacterial infections in 1918, in contrast to the severe lung disease, multi-organ damage, inflammatory response, and thromboembolic tendency reported for the most serious cases of COVID-19. Secondary bacterial infections related to COVID-19 have mostly been managed effectively with antibiotics. Additionally, the 1918 flu was particularly deadly for people aged 20 to 40, those in the childbearing years; in contrast, the SARS-CoV-2 virus causes higher mortality among older persons. While the long-term outcomes may not be the same as in 1918, it is important to study the effects of SARS-CoV-2 during pregnancy in terms of consequences for later-life health and aging.
|
plasmid in which the N-terminus (1-61 aa) of beta-1,4-galactosyltransferase 1 was fused with a cyan fluorescent protein variant, mTurquoise2. 3 In this system, we only needed to stain the viral proteins with anti-FLAG antibody. As shown in Fig. 1a (right) and the Supplementary Fig. S3b, the co-transfected cells were fixed for IFA and the viral proteins were immuno-stained in red fluorescence. After merging the different colors, the results showed that ORF6, ORF7a and NSP15, like the M protein, colocalized with the Golgi apparatus. Therefore, we identified four SARS-CoV-2 proteins (M, ORF6, ORF7a and NSP15) that are associated with the Golgi apparatus.
Like other positive-stranded RNA viruses, SARS-CoV-2 RNA is transported to the endoplasmic reticulum (ER) after viral entry. The ER is the major cellular organelle that viruses need to usurp because it is a factory for the production of viral proteins. Most SARS-CoV-2 proteins were seen in the cytoplasm, as shown in the Supplementary Fig. S2, so we asked whether they colocalize with the ER. To that end, we co-transfected several viral protein-expressing plasmids (NSP6, ORF7b, ORF8 and ORF10) together with pmcCh-sec61-beta (marking the ER and the ER-Golgi intermediate compartment). The ER appears in red fluorescence because sec61 beta is tagged with mCherry, whereas the viral proteins (NSP6, ORF7b, ORF8 and ORF10) were shown in green fluorescence by anti-FLAG staining. Although SARS-CoV-2 proteins are all generated in the ER, IFA found that only NSP6, ORF7b, ORF8 and ORF10 colocalized with the ER, as shown in Fig. 1b and the Supplementary Fig. S3c. The yellow color in the merged pictures reflects colocalization between the viral proteins and the ER protein sec61 beta. ORF7b is a 43-aa protein, ORF8 has only 121 aa, and ORF10 contains 38 aa. Although they are small proteins, their functions might be important for viral replication and need to be investigated further.
The endosome is a membrane-bound organelle in eukaryotic cells that matures from early endosome to late endosome as it acidifies. The late endosome then fuses with the lysosome, where cargo is degraded by lysosomal hydrolytic enzymes. Here we used plasmids expressing marker proteins for the early endosome (Rab5), recycling endosome (Rab11), late endosome (Rab7), and lysosome (Lamp1), 4 which were co-transfected with SARS-CoV-2 protein-expressing plasmids. Our IFA results identified ORF3a as the only viral protein associated with the endosome and lysosome (Fig. 1c and the Supplementary Fig. S3d). To confirm the specificity of our IFA assay using the co-transfection system, we also co-transfected the ORF3a-expressing plasmid with Rabin8, an ER and Golgi apparatus intermediate protein tagged with GFP. No significant colocalization was detected between ORF3a and Rabin8. Therefore, the ORF3a protein is related to endocytosis-related biological activities. Interestingly, for the first time, we found that the N protein colocalizes with lipid droplets (LD), visualized by BODIPY 500/510, in Caco-2 cells (Supplementary Fig. S4).
Interestingly, some SARS-CoV-2 proteins, such as NSP1, NSP5, NSP9 and NSP13, are detected in the nucleus, as shown in the Supplementary Fig. S2. For these nuclear proteins, we asked whether they interact with nuclear structures such as the splicing compartment (SC), which is important for gene splicing. As shown in the Supplementary Fig. S5, we did not detect any relationship between NSP1 and SC35. Both NSP5 and NSP9 distribute diffusely in the nucleus, but in the strongly stained spots of NSP5 or NSP9, SC35 appears to colocalize with the viral proteins. Interestingly, NSP13 exists in the nuclei of HEp-2 cells as round "dots" (shown by white arrows) that colocalize exactly with SC35 (Supplementary Fig. S5). A similar phenomenon was reported for Zika virus, whose NS5 interacts with SC35. 5 A similar experiment performed in Caco-2 cells confirmed that NSP13 colocalizes with SC35 in the nuclei (Supplementary Fig. S6).
In summary, we molecularly cloned all the genes of SARS-CoV-2 and applied a systematic IFA to characterize the subcellular distribution of the viral proteins. Our results provide the field with new insight into the biological functions of SARS-CoV-2 proteins, because a protein's subcellular localization suggests where it may exert its biological function. However, a detailed study should be conducted in the context of SARS-CoV-2 infection of cells, because viral proteins expressed by transfection may behave differently from those expressed during viral infection.
|
Digital surveillance methods, such as location-tracking apps on smartphones, have been implemented in many countries during the COVID-19 pandemic, but little is known about predictors of their acceptance. Could it be that prosocial responsibility, to which authorities appealed in order to enhance compliance with quarantine measures, also increases acceptance of digital surveillance and restrictions of privacy? In their fight against the COVID-19 pandemic, governments around the world communicated that self-isolation and social distancing measures are every citizen's duty in order to protect the health not only of oneself but also of vulnerable others. We suggest that prosocial responsibility, besides motivating people to comply with anti-pandemic measures, also undermines people's valuation of privacy. In an online study conducted with US participants, we examined correlates of people's willingness to sacrifice individual rights and succumb to surveillance, with a particular focus on prosocial responsibility. First, replicating prior research, we found that perceived prosocial responsibility was a powerful predictor of compliance with self-isolation and social distancing measures. Second, going beyond prior research, we found that perceived prosocial responsibility also predicted willingness to accept restrictions of individual rights and privacy, as well as to accept digital surveillance for the sake of public health. While we identify a range of additional predictors, the effects of prosocial responsibility hold after controlling for alternative processes, such as perceived self-risk, impact of the pandemic on oneself, or personal value of freedom. These findings suggest that prosocial responsibility may act as a Trojan horse for privacy compromises.
In response to the COVID-19 pandemic, governments around the world, besides appealing to people to comply with self-isolation and social distancing recommendations, have also resorted to digital surveillance measures (Calvo et al., 2020). One of the most common forms of surveillance implemented is the use of smartphone location data (Amit et al., 2020; Heaven, 2020, March 17).
For example, Israel has been using a technology originally developed for counterterrorism purposes to track the mobile phones of civilians in order to contain the spread of the virus (Livni, 2020, March 17) . China has been tracking citizens in many cities through a smartphone app that assigns a green, yellow, or red color code as indication of one's health status (Mozur et al., 2020, March 1) . Even in privacy-conscious Europe, Austrian health authorities encouraged citizens to download a contact-tracing app developed for the pandemic by the Austrian Red Cross (Birnbaum and Spolar, 2020, April 18) . Although these measures have been imposed for the protection of public health, they have stirred controversy due to potential threats to personal privacy and civil rights (Abbas et al., 2020; Calvo et al., 2020; Roth et al., 2020; Singer and Sang-Hun, 2020, April 17) . Essentially, their implementation may result in the protection of public health at the price of a loss of individual freedoms.
In this research, we explore factors that make people accept such losses of individual freedoms. In particular, we focus on perceptions of prosocial responsibility as a factor that makes people willing to pay that price in a pandemic and accept an increase in digital surveillance. In the context of this research, we define prosocial responsibility as a state of heightened awareness that one's behavior has consequences for others, coupled with concerns about their well-being. In the COVID-19 pandemic, authorities have extensively appealed to prosocial responsibility as a way to motivate people to adhere to self-isolation and social distancing measures. Compliance with these measures is crucial in the fight against the pandemic. The literature shows that feeling responsible for others can have a large impact on people's motivation and behavior. For example, consumers are willing to incur costs to buy products if they believe that these have a positive impact on society (Small and Cryder, 2016), and taxpayers support taxation if they recognize that their tax contributions help fellow citizens (Thornton et al., 2019). Research in ethical decision-making suggests that people do not want others to think they are behaving selfishly; instead, they enjoy reputational benefits, such as respect and admiration, if they behave in line with what is considered normatively 'good' (Van Bavel et al., 2020).
More specific to the topic of the present investigation, the COVID-19 pandemic, a recent review of 3,166 papers on the psychological impact of quarantine demonstrated the power of appeals to benefits for others (Brooks et al., 2020) . Reminding the public about the benefits of self-isolation to society can buffer against the negative consequences of quarantine. Specifically, it has been argued that "reinforcing that quarantine is helping to keep others safe, including those particularly vulnerable . . . can only help to reduce the mental health effect and adherence in those quarantined" (Brooks et al., 2020, p. 919) . Apparently, feeling that others will benefit from one's behavior increases the willingness to endure stressful situations such as self-isolation and makes these situations easier to bear. But do people's feelings of prosocial responsibility also affect their acceptance of flanking surveillance measures?
In this research, we argue that perceived prosocial responsibility increases both compliance with anti-pandemic measures and support for surveillance and for restrictions of civil rights and privacy. Regardless of whether an elevated sense of prosocial responsibility implicitly shifts mental weights from individual to public rights, or whether it operates at an affective level fueled by the desire to avoid the emotional burden of feeling responsible for others' suffering, people might feel that the protection of their individual rights matters less than the protection of a common good such as public health. A sense of prosocial responsibility may act as a blanket measure that heightens a person's focus on others' well-being at the expense of tuning down the fight for individual rights. Thus, we predict that people with higher prosocial responsibility both comply more with quarantine measures and are more willing to accept radical measures restricting individual rights in general and privacy more specifically.
We tested these predictions with an online study conducted during the COVID-19 pandemic in the US. Specifically, we examined whether prosocial responsibility predicts on the one hand compliance with self-isolation and self-distancing measures, as prior literature suggests, and on the other hand acceptance of digital surveillance and restrictions of individual rights and privacy, as we propose. In addition, we add valuable insights by assessing and controlling for several relevant variables that could also play a role. Specifically, we included variables that address vulnerability to COVID-19 (perceived self-risk, perceived close other-risk, COVID-19 health status, perceived impact on various facets of one's life, and perceived impact on state), potentially relevant personality traits (narcissism, belief in free will, helplessness, and value of freedom), and demographic variables (age, sex, urban/rural area, and political affiliation).
We recruited 302 US residents online (Prolific). Four participants who failed an attention check (to select a specific answer in one question) were excluded from further analyses. The final sample comprised 298 participants (133 men, 165 women, age 18-80, M = 50.71, SD = 20.62). A sensitivity power analysis showed that this sample size can reliably detect small to medium effect sizes of ρ = 0.16 (two-tailed) with an alpha level of 0.05 and power of 0.80.
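For readers who wish to reproduce this kind of sensitivity analysis, a minimal sketch is shown below. The Fisher z approximation for correlation tests is our assumption; the authors do not state which software or method they used, but the approximation recovers the reported ρ ≈ 0.16.

```python
import numpy as np
from scipy import stats

def min_detectable_r(n: int, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest correlation detectable two-tailed at the given alpha and power,
    via Fisher's z: solve |arctanh(r)| * sqrt(n - 3) = z_{1-alpha/2} + z_{power}."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z_pow = stats.norm.ppf(power)
    return float(np.tanh((z_crit + z_pow) / np.sqrt(n - 3)))

print(round(min_detectable_r(298), 2))  # ~0.16, matching the value reported above
```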
The study was conducted online on May 17, 2020. The following predictor and outcome variables were assessed.
Prosocial responsibility was assessed with six items (α = 0.89): "In this COVID-19 pandemic, I feel responsible for the health and life of others," "In this COVID-19 pandemic, I am doing everything I can to minimize the chances of putting others at risk," "In this COVID-19 pandemic, I would have a bad conscience if I did something that puts vulnerable people's health at risk," "In this COVID-19 pandemic, I feel that my acts have consequences on the lives of others," "In this COVID-19 pandemic, I would hate it if I did anything that risks vulnerable people's lives," and "In this COVID-19 pandemic, not complying with the measures would make me feel almost like a criminal" (1 = strongly disagree; 7 = strongly agree).
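The α values reported for this and the following scales are Cronbach's alpha, the standard internal-consistency statistic. As a quick reference, a minimal sketch of its computation is given below; the toy scores are invented for illustration and are not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy example: three respondents rating six items on a 1-7 scale.
scores = np.array([[6, 7, 6, 5, 7, 6],
                   [3, 4, 3, 4, 3, 2],
                   [5, 5, 6, 5, 6, 5]])
print(round(cronbach_alpha(scores), 2))
```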
We included several variables that broadly tap vulnerability to the virus. Vulnerability has been shown to be a factor making people susceptible to conformity (Murray and Schaller, 2012; Wu and Chang, 2012) and, thus, might also increase acceptance of restrictions of individual freedoms.
Perceived self-risk was assessed with four items (α = 0.91): "I consider myself to belong to a high-risk group regarding COVID-19," "I think I would be severely affected if I am infected with COVID-19," "I think my life would be at risk if I am infected with COVID-19," and "In general, I worry about my health with regards to COVID-19" (1 = strongly disagree; 7 = strongly agree).
Perceived close other-risk was assessed with four items similar to perceived self-risk (α = 0.94): "I have close others (family, friends, or relatives) who belong to a high-risk group regarding COVID-19," "Some of my close others (family, friends, or relatives) might be severely affected if they are infected with COVID-19," "The life of some of my close others (family, friends, or relatives) might be at risk if they are infected with COVID-19," and "In general, I worry about the health of some of my close others (family, friends, or relatives) with regards to COVID-19" (1 = strongly disagree; 7 = strongly agree).
Participants indicated whether they had tested positive for coronavirus themselves (1 = yes; 2 = no; 3 = rather not say), and the same for any of their close relations (family, close friends).
Participants were asked how negatively or positively the COVID-19 pandemic has affected each one of the following facets of their lives: job, income, emotional well-being, physical well-being, personal relationships (1 = very negatively; 7 = very positively).
We measured how badly the state where they had been during lockdown was hit by COVID-19 (1 = not at all badly; 7 = very badly).
Additionally, we included the following potentially relevant personality traits.
Narcissists are self-absorbed and manipulative individuals with a strong sense of specialness and entitlement, a lack of empathy, and a proclivity to exploitation (Thomaes et al., 2018) . Therefore, it is reasonable to assume that narcissists should be less likely to comply with measures that stress the protection of others (Grover, 2020, April 18) , let alone limit their own freedoms for the common good. Narcissism was assessed with a scale adopted from Webster and Jonason (2013) , which comprises four items (α = 0.82; e.g., "I tend to want others to admire me"; 1 = strongly disagree; 7 = strongly agree).
Belief in free will is another relevant predictor because it corresponds to a combination of responsibility and autonomy (Nahmias et al., 2005). Believing in free will entails acceptance that individuals are autonomous and responsible and have the capacity to act in different ways in the same situation. Belief in free will was assessed with the free will subscale of the FAD-Plus (Paulhus and Carey, 2011), which comprises seven items (α = 0.85; e.g., "People must take full responsibility for any bad choices they make," "People have complete free will"; 1 = strongly disagree; 7 = strongly agree).
Helplessness refers to the feeling that one has no control over a situation due to repeated experiences with aversive stimuli, which can lead to failure to use opportunities to avoid these stimuli even when control is possible (Seligman, 1972). Privacy is essentially linked to personal control (Brandimarte et al., 2013). Therefore, people who feel helpless and deprived of personal control might also be less motivated to protect their privacy and safeguard their individual rights, even when they have the opportunity to do so. Helplessness was assessed with the perceived helplessness subscale of the Depressive Attributions Questionnaire (Kleim et al., 2011), which comprises four items (α = 0.86; e.g., "I feel helpless when bad things happen"; 1 = strongly disagree; 7 = strongly agree).
Individual differences in the value of freedom might also predict the extent to which individuals are willing to sacrifice privacy and individual rights. Participants ranked nine values taken from the Rokeach Value Survey (Rokeach, 1973) into an order of importance to them, as guiding principles in their life. Of interest to this study were the values "Freedom (independence, free choice)" and "National security (protection from attack)." We created a new variable that indicates how much higher freedom is ranked compared to national security by subtracting the freedom rank from the national security rank.
We collected information about sex, age, area (1 = rural; 7 = urban), and political affiliation (1 = democrat; 7 = republican).
Compliance with measures against COVID-19 ("To what extent have you been following these measures in the past months?")
was measured with two items in two domains (α = 0.68) 1 : "Self-isolation (staying home even without having any symptoms)" and "Social distancing (maintaining a safe distance from others)" (1 = Never; 2 = Rarely; 3 = Sometimes; 4 = About half the time; 5 = Frequently; 6 = Most of the time; 7 = Always).
Willingness to sacrifice privacy was measured with two items (α = 0.95), following a short explanation that "as a way to deal with the COVID-19 pandemic, several countries have adopted measures that require extensive surveillance (e.g., through collecting data on people's mobile phones and monitoring their movements)": "In your opinion, do governments have the right to limit people's privacy and impose surveillance for the protection of public health?" and "Are you willing to sacrifice your privacy and accept surveillance for the sake of public health?" (1 = definitely no; 7 = definitely yes).
Past surveillance acceptance was assessed by summing up how many of the following seven actions participants had already taken as a way to combat the pandemic (α = 0.58) 2 : "Install an app on your mobile phone that monitors information about your movements (e.g., where you are going)," "Install an app on your mobile phone that monitors information about your physical contacts (e.g., with whom you are in contact)," "Wear a bracelet that monitors information about your movements (e.g., where you are going)," "Wear a bracelet that monitors information about your physical contacts (e.g., with whom you are in contact)," "Wear a bracelet that monitors information about your health (e.g., your temperature)," "Allow companies (e.g., airlines, your employer) to have access to your medical records," and "Allow companies (e.g., cafes and restaurants, stores) to measure your temperature before entering a venue."
Willingness to accept surveillance in the future was assessed with seven items (α = 0.92) asking participants to indicate their willingness to accept the same measures as in past surveillance acceptance in the future ("How willing are you to do the following in order to fight against the current pandemic or other similar pandemics in the future?"; 1 = not willing at all; 7 = very willing).
Participants first read that "in times of crises, leaders and policy-makers sometimes have to take decisions that require a trade-off between individual rights (freedom, autonomy, privacy, self-determination) and public health." As an example, it was mentioned that "in the current pandemic, world leaders restricted some individual rights for the sake of protecting all citizens' health." Then, participants indicated what they would prioritize if such a trade-off were inevitable with a single item ("In your opinion, whenever such a trade-off is inevitable, what should be prioritized, individual freedoms or public health?"; 1 = definitely individual freedoms; 6 = definitely public health).
Descriptive statistics and inter-correlations of all variables are presented in Table 1 . Inspection of correlation coefficients indicates that prosocial responsibility was positively correlated with compliance with measures to fight COVID-19, r = 0.50, p < 0.001; willingness to sacrifice privacy, r = 0.46, p < 0.001; past surveillance acceptance, r = 0.11, p = 0.059; willingness to accept surveillance, r = 0.41, p < 0.001; and prioritizing public health over individual freedoms when a trade-off between the two is inevitable, r = 0.57, p < 0.001.
We first examined whether a higher sense of prosocial responsibility is associated with higher compliance with self-isolation and social distancing measures after accounting for all control variables in a step-wise linear regression analysis. In the first step, prosocial responsibility served as predictor and compliance with measures as outcome variable. Results showed that prosocial responsibility was a significant predictor of compliance, B = 0.42, SE = 0.04, β = 0.50, p < 0.001. In step two, we entered as control variables all additional predictors listed in Section "Method." Results showed that prosocial responsibility remained a significant predictor of compliance after controlling for these 18 variables, B = 0.29, SE = 0.05, β = 0.34, p < 0.001 (see detailed results in Table 2). In line with prior research (Brooks et al., 2020), people who feel more responsible toward others were more likely to comply with the measures imposed to combat the pandemic.
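To make the structure of this two-step analysis concrete, a minimal sketch in Python with statsmodels is shown below. The file name, column names, and the abbreviated control set are hypothetical; the authors' dataset and exact variable labels are not public in this excerpt.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per participant, one column per measure.
df = pd.read_csv("covid_survey.csv")

# Step 1: prosocial responsibility as the sole predictor of compliance.
step1 = smf.ols("compliance ~ prosocial_responsibility", data=df).fit()

# Step 2: add the control variables described in the Method section
# (only a subset is listed here for brevity).
controls = ("self_risk + other_risk + narcissism + free_will + helplessness"
            " + freedom_value + age + C(sex) + political_affiliation")
step2 = smf.ols(f"compliance ~ prosocial_responsibility + {controls}", data=df).fit()

# Compare the coefficient of interest across the two steps.
print(step1.params["prosocial_responsibility"],
      step2.params["prosocial_responsibility"])
```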
We then tested whether a higher sense of prosocial responsibility is associated also with a higher willingness to sacrifice privacy for the sake of public health. Results showed that prosocial responsibility was a significant predictor of willingness to sacrifice privacy, B = 0.10, SE = 0.11, β = 0.46, p < 0.001. Moreover, prosocial responsibility remained a significant predictor of willingness to sacrifice privacy after entering all control variables, B = 0.69, SE = 0.13, β = 0.32, p < 0.001 (see detailed results in Table 2 ). Therefore, people higher in prosocial responsibility were more willing to sacrifice their privacy for the sake of public health.
Another linear regression showed that prosocial responsibility was a marginally significant predictor of past surveillance acceptance, B = 0.07, SE = 0.04, β = 0.11, p = 0.059. After controlling for the same variables as above, prosocial responsibility became a significant predictor of past surveillance acceptance, B = 0.13, SE = 0.05, β = 0.19, p = 0.010 (see detailed results in Table 2). Therefore, people who feel more responsible toward others in the pandemic have already accepted more surveillance measures.
Results of a linear regression analysis indicated that prosocial responsibility also predicted willingness to accept surveillance in the future, B = 0.72, SE = 0.09, β = 0.41, p < 0.001. The effect of prosocial responsibility on willingness to accept surveillance remained significant after entering the control variables, B = 0.54, SE = 0.12, β = 0.31, p < 0.001 (see detailed results in Table 2 ). Thus, prosocial responsibility did not only predict past surveillance acceptance but also willingness to accept surveillance in the future.
We conducted another regression with prosocial responsibility as predictor and the dilemma between individual freedoms and public health as outcome variable. Results showed that prosocial responsibility was significantly associated with a preference for public health over individual freedoms, B = 0.83, SE = 0.07, β = 0.57, p < 0.001. This association remained significant after controlling for the same variables as before, B = 0.51, SE = 0.08, β = 0.35, p < 0.001 (see detailed results in Table 2 ). That is, the stronger a person's sense of prosocial responsibility, the more likely that person prioritizes public health over individual freedoms.
During the COVID-19 pandemic, governments around the world emphasized responsibility toward others as a way to enforce self-isolation and social distancing. In line with a recent review of the literature, which advises public health officials to emphatically communicate the benefits of self-isolation for others (Brooks et al., 2020), we found that a stronger sense of prosocial responsibility predicted compliance with self-isolation and social distancing measures. At the same time, our findings suggest that prosocial responsibility is also associated with acceptance of restrictions of privacy and individual rights. Apparently, feeling responsible for others leads people to devalue their own rights. Critically, this holds over and above a host of alternative explanations and related variables, such as how much people believe that they personally or their close others are at risk, how much they value freedom, or how negatively various facets of their lives have been affected by the pandemic. This finding implies that prosocial responsibility can be a double-edged sword. On the one hand, it enhances compliance with self-isolation and social distancing, which is of paramount importance in pandemic crises. On the other hand, prosocial responsibility might constitute a Trojan horse for privacy undercuts because it makes people more generally accept a loss of individual rights. This finding echoes growing concerns about the potential misuse of digital surveillance methods during the pandemic (e.g., Abbas et al., 2020; Calvo et al., 2020; Roth et al., 2020) and highlights a potential long-term side-effect that may eventually turn out detrimental for all individuals.
Our research contributes to the literature on the effectiveness of prosocial appeals more broadly (e.g., Small and Cryder, 2016; Thornton et al., 2019) by highlighting the role of prosocial responsibility in the fight against a pandemic (Brooks et al., 2020). Moreover, our findings contribute to the privacy literature. Thus far, the privacy literature has focused on the individual when examining predictors of privacy behavior, such as desire for control over personal information (Phelps et al., 2001), knowledge about risks (Park et al., 2012), and privacy concerns (Gerber et al., 2018). Our research adds a novel social dimension to recent work that has begun to investigate the interdependent aspects of privacy (Kamleitner and Mitchell, 2019). In many situations, individuals endanger others' privacy for their self-interest (e.g., when allowing apps access to their contacts). Here, we show the opposite: out of concern about others, individuals might endanger their own privacy. Both lines of research underscore the role of social context in people's privacy-related behaviors and point to the need for more research in this direction.
Besides the crucial role of prosocial responsibility, the current research provides insights into the role of other variables in the pandemic. In terms of COVID-19-related variables, we found that perceived vulnerability in its various forms (perceived self-risk or close other-risk, age, COVID-19 impact on state) was consistently associated with both higher compliance with the measures against COVID-19 and higher acceptance of surveillance and privacy restrictions, converging with prior research showing that vulnerability increases conformity (Murray and Schaller, 2012; Wu and Chang, 2012) . In terms of demographic variables, compliance with measures as well as acceptance of surveillance and privacy restrictions were higher among democrats (vs. republicans) and among people living in urban (vs. rural) areas.
In terms of personality traits, we found that narcissism was associated with lower compliance, confirming the assumption that in this situation, too, narcissists might indeed behave selfishly and disregard the consequences of their behavior for others (Grover, 2020, April 18). Moreover, a higher belief in free will was marginally associated with lower prosocial responsibility and lower prioritization of public health vis-à-vis individual freedoms. Extending prior findings that belief in free will is associated with a more punitive attitude toward wrongdoers (Baumeister and Brewer, 2012), our findings suggest that belief in free will might also imply that everyone is responsible only for themselves and not for others. A higher value of freedom was also associated with lower acceptance of privacy restrictions. However, contrary to predictions, feeling helpless was unrelated to the willingness to make sacrifices in one's privacy or accept surveillance.
By investigating and controlling for a range of relevant predictors of people's willingness to accept a loss of individual rights, our research adds several novel but preliminary insights to the study of this timely phenomenon. Future research should follow up on the multiple leads this initial exploration provides. Most importantly, our research is the first to demonstrate a robust link between people's sense of prosocial responsibility and their willingness to sacrifice individual rights, in particular privacy. Future research is needed to corroborate this link in other cultural contexts and with measures that are not dependent on self-reports. Should results be as robust as we expect, then the prosocial appeals used to fight the pandemic might come at a potential long-term price to individual rights.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
MK conducted the study and analyzed the data in consultation with BK. MK wrote the manuscript. BK reviewed and edited the manuscript. Both authors conceptualized and designed the study and contributed to the article and approved the submitted version.
|
As of 1 September 2020, more than 26 million cases of coronavirus disease 2019 (COVID-19) have been reported globally from 215 countries, territories and areas and 2 international surveys [1]. In the WHO (World Health Organization) European Region, more than 3 million cases have been reported, representing 21% of the global burden [2]. Although the pathology of COVID-19 is not yet fully understood, the infection may cause a wide spectrum of presentations, ranging from asymptomatic cases and mild symptoms of upper respiratory tract infection to life-threatening conditions. The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can affect the respiratory, gastrointestinal and neurological systems, and there is growing evidence that other organs can also be impaired. According to a study from the China Center for Disease Control, 81% of people infected by SARS-CoV-2 had mild presentations, while 14% had severe manifestations and about 5% had critical manifestations requiring hospitalization and intensive care support [3]. Although the case fatality rate (CFR) has changed over time from the beginning of the outbreak and differs by location, an initial estimate suggests that the CFR for SARS-CoV-2 is 2.58% [4]. However, it is difficult to calculate the CFR until the outbreak is over, and according to a systematic meta-analysis of the clinical features of COVID-19, the value may range from 3.75% to 13% across settings [5].
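For clarity, the crude CFR estimates cited above are simply the ratio of reported deaths to reported confirmed cases; this standard epidemiological definition is ours for reference, not quoted from the study. During an ongoing outbreak the crude estimator is biased, both because outcomes of recently reported cases are still unknown and because mild cases are under-ascertained:

```latex
\mathrm{CFR}_{\text{crude}} = \frac{\text{cumulative reported deaths}}{\text{cumulative confirmed cases}} \times 100\%
```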
Host factors could play a key role in determining clinical presentation and outcome of the infection. In fact, typical features of symptomatic patients infected by SARS-CoV-2 are disruption of the endothelial barrier, dysfunction of the alveolar capillary oxygen transmission and impairment of oxygen diffusion capacity [6] .
A possible contribution of human gene polymorphisms involved in the host antiviral responses to SARS-CoV-2 has been recently postulated, and there is growing evidence that some gene polymorphisms could affect susceptibility and severity of the COVID-19 course [7]. Family clustering of severe cases has been reported worldwide, which supports further efforts in exploring these genetic determinants in order to be better prepared for future waves [8-10].
In particular, there is growing evidence that type III, or lambda, interferon (IFNL) signaling is involved in the regulation of immunity and in the sustained antiviral response [11]. Homozygosity for IFNλ3-IFNλ4 variants can be associated with reduced viral clearance in children affected by acute respiratory infections [12].
Another essential tool for managing and responding to the current COVID-19 pandemic is molecular testing. Clinical diagnosis of COVID-19 is confirmed by using real-time reverse transcription polymerase chain reaction (rRT-PCR) to detect SARS-CoV-2 RNA [13]. rRT-PCR provides a qualitative (positive/negative) diagnosis and, at least theoretically, a quantitative result, the cycle threshold (Ct), that can be considered a proxy of viral load [14]. A high viral load requires fewer rRT-PCR cycles before a significant fluorescent signal is first recorded, whereas a low viral load requires many cycles [15]. Data on the relationship between viral load and clinical outcome are still scarce, including viral load profiles at different times after diagnosis. A recently published review reports that the highest viral loads are detected at the time of symptom onset and generally decrease within one to three weeks thereafter [16]. However, no clear evidence relates infectivity to the presence of viral RNA detected by rRT-PCR, as this does not indicate the presence of live virus. Further findings have shown a correlation between cycle threshold levels and sample infectivity in a cell culture model [8]. In the present study, we investigated the relationship between cycle threshold values of quantitative rRT-PCR for SARS-CoV-2, the presence of IFNL3/IFNL4 gene polymorphisms, and the risk of severe outcomes (hospitalization, intensive care support or death) in COVID-19 patients.
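Because each PCR cycle roughly doubles the target under ideal amplification, the relationship between Ct and relative viral load is approximately exponential. The back-of-the-envelope conversion below is our illustration, not a calculation from the study; the assumption of perfect (100%) efficiency is ours.

```python
def relative_load(ct: float, reference_ct: float, efficiency: float = 1.0) -> float:
    """Fold difference in starting template relative to a reference sample.

    With amplification efficiency E, one cycle multiplies the target by (1 + E),
    so the load ratio is (1 + E) ** (reference_ct - ct). E = 1.0 assumes perfect
    doubling each cycle, an idealization of real assays.
    """
    return (1.0 + efficiency) ** (reference_ct - ct)

# A sample at Ct = 26 carries roughly 2**8 = 256 times more starting template
# than one at Ct = 34, under the perfect-doubling assumption.
print(relative_load(26, 34))
```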
This is an observational study carried out between 24 February 2020 and 8 April 2020 on 383 consecutive patients whose respiratory swab samples had been sent to the referral Laboratory for COVID-19 Surveillance for Western Sicily, located at the University Hospital "P. Giaccone" of Palermo. All patients included were laboratory-confirmed SARS-CoV-2 cases with a positive rRT-PCR result from nasal, pharyngeal or nasopharyngeal swabs, according to the Centers for Disease Control and Prevention protocol [17]. For each patient, a standardized form was completed, including sociodemographic variables (age, sex and residency), whereas clinical outcomes (home isolation, hospitalization, admission to intensive care unit and death) were obtained by consulting the clinical profiles centrally provided by the Italian National Institute of Health (Istituto Superiore di Sanità, ISS) and, when available, by direct contact with the hospitals involved in the care of each recruited patient. Outcome categories were considered mutually exclusive and the worst outcome was recorded for each patient.
Each patient was monitored for at least 21 days after recruitment, and the final day of follow-up was 8 April 2020. Due to the observational design (linked to referral), no follow-up biological samples were available. For two patients the rRT-PCR cycle threshold value was not available, and they were thus excluded.
Before the swab sampling, an individual informed consent was obtained from each patient by the health care provider. An approval to conduct the study was required and obtained from the Ethical Committee of the A.O.U.P. "P. Giaccone" of Palermo, Italy. The research reported in this paper is in accordance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects.
SNP (single nucleotide polymorphism) genotyping was carried out on whole nucleic acids extracted from nasal or pharyngeal swabs (QIAamp Viral RNA Mini Kit, QIAGEN, Hilden, Germany) by the TaqMan allelic discrimination genotyping method (StepOne Plus Real Time PCR System, Applied Biosystems, Foster City, CA, USA) using custom genotyping assays (Thermo Fisher Scientific, Waltham, MA, USA). Complete genotyping was not possible for all patients due to the scarce amount of DNA available from swabs. Genotype calls were made with Applied Biosystems software v2.3. Genotyping was conducted in a blinded fashion relative to patient characteristics. Before testing for SNPs, samples were anonymized, and a unique randomly generated identification code was assigned to each record and to the corresponding swab. Researchers performing genetic analyses were unable to identify patients at any stage, and no permanent record linking these data to patient IDs was produced.
The normality distribution of continuous variables was assessed by the Shapiro-Wilk test. Non-normal distributed continuous variables are presented as median and interquartile range (IQR), and categorical variables are expressed as the number of patients (percentage).
The Mann-Whitney rank sum test or the ANOVA test were used to compare non-parametric continuous variables between subgroups. Chi-square, Fisher exact tests and Fisher-Freeman-Halton tests were used for categorical variables as appropriate.
A multivariable logistic model was built to determine the association between rRT-PCR cycle threshold values < 26 (the value of the first quartile) and homozygous mutant polymorphisms, after adjustment for age. A multinomial regression model was developed to test the relationship between clinical outcome (death/critical care or hospitalization vs. home isolation) and the independent variables (sex, age and rRT-PCR cycle threshold values) found to be statistically significantly associated with the outcomes in the univariate analysis. All statistical tests were two-tailed, and statistical significance was defined as p ≤ 0.05. Analyses were performed using R Software analysis 3.6.1 (Vienna, Austria) [18].
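The authors ran these models in R; purely to illustrate the model structure, a minimal Python sketch with statsmodels is given below. The file name, column names, and outcome coding (0 = home isolation, 1 = hospitalization, 2 = death/intensive care) are our hypothetical choices, not details from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data layout; the study's dataset is not public.
df = pd.read_csv("covid_ct_cohort.csv")  # columns: outcome, age, sex, ct_value

X = sm.add_constant(df[["age", "sex", "ct_value"]])
model = sm.MNLogit(df["outcome"], X).fit()  # baseline category: home isolation (0)

# Exponentiated coefficients give adjusted odds ratios relative to home isolation;
# e.g. exp(beta_ct) ~ 0.95 for the hospitalization equation would correspond to a
# 5% lower odds of hospitalization per unit increase in Ct, as in Table 4.
print(np.exp(model.params))
```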
The general characteristics of the study patients are summarized in Table 1. Overall, 381 patients with a median age of 58 years were evaluated. Death occurred in 32 (8.4%), whereas the large majority of subjects were isolated at home (235, 61.7%). The frequency of subjects homozygous for the variant polymorphism was 10.8% for TT in IFNL3 and 11.3% for DG in IFNL4, the two loci being in high linkage disequilibrium. In Table 2, median rRT-PCR cycle threshold values are compared according to age, sex and the investigated polymorphisms. A logistic regression analysis was performed for the independent variables associated with rRT-PCR cycle threshold values < 26. In this multivariable analysis, age > 74 years and IFNL4 DG homozygosity maintained their significance (adj-OR = 1.16, 95% CI = 1.01-1.34 and adj-OR = 1.24, 95% CI = 1.09-1.40, respectively).
In Table 3, the investigated variables are compared across clinical outcomes (death or intensive care support, hospitalization vs. home isolation). The variables found to be statistically significantly associated with clinical outcomes were included in a multinomial regression analysis, reported in Table 4. In particular, after adjusting for age and sex, a significantly lower risk of hospitalization was found for subjects with higher rRT-PCR cycle threshold values (adj-OR = 0.95, 95% CI = 0.91-0.99, p = 0.028).
A better understanding of the pathogenesis of SARS-CoV-2-induced direct and indirect damage in the host is needed in order to manage COVID-19. We explored two factors potentially related to diverse clinical outcomes, i.e., viral load estimated by a proxy (the cycle threshold value of quantitative rRT-PCR for SARS-CoV-2 RNA) and single nucleotide polymorphisms (SNPs) in the genes encoding IFNλs (IFNL3 and IFNL4), which are major components of the innate immune response system.
The first intriguing finding of our study was that lower PCR Ct values, and hence supposedly higher loads of SARS-CoV-2, were observed in older patients, in those with more severe outcomes and in the presence of DG homozygosity for IFNL4. These associations should be considered with caution, since they were found in two different multivariable models and, thus, a direct linkage between the three variables can only be inferred. However, more in-depth reasoning may help to clarify our hypotheses and these relations.
With regard to age, our study confirms previous, well-substantiated evidence that older age is linked to a worse outcome [19, 20]. It must be stressed that we did not assess the role of pre-existing comorbid conditions such as hypertension, cardiovascular disease and diabetes, due to the lack of data for many of these patients; hence, it cannot be excluded that old age as a risk factor also acts as a proxy for such comorbidities. Older age in our cohort was significantly associated with a higher load of SARS-CoV-2, an association recently reported by other authors [21, 22]. A theoretical explanation for this association could be related to immunosenescence, which may impair innate and adaptive immune responses as age increases.
A low PCR Ct value in our group of patients was also correlated with a higher risk of admission to the hospital or to an intensive care unit, and therefore of death. A correlation with death has already been observed by Chu et al. in relation to SARS-CoV-2 viral load [23]. In our analysis, after adjustment for potential confounding by age and sex, only hospitalization maintained a significant correlation with viral load. It must be stressed that our proxy estimate of SARS-CoV-2 load relates to the first detection of SARS-CoV-2 in patients, whereas death, in almost all deceased patients, occurred days or even weeks after first detection of the virus. This makes a direct causal relationship between the two variables quite unlikely. Conversely, after adjusting for age and sex, a 5% reduction in the risk of hospitalization was observed for each unit increase in PCR Ct value. Other authors have suggested a relationship between viral load, measured as PCR Ct, and clinical severity of disease [24], although these findings require further investigation.
As is already well documented [25-27], we found a close association between male sex and severity of COVID-19, both for hospitalization and for death/intensive care. Some authors have hypothesized a role for potential sex-specific mechanisms modulating the course of the disease, such as hormone-regulated expression of some genes, as well as sex hormone-driven innate and adaptive immune responses and immune aging [25]. However, other concurrent risk factors, including comorbidities and gender-specific lifestyle or health behaviour, could interact and increase this risk. Because SARS-CoV-2 is a novel virus, little is known about the impact of host genetics on infection and clinical presentation. While much research has focused on viral receptors, and several genetic associations between ACE2 variants and increased susceptibility to infection have been reported, limited data exist regarding the other genes implicated in the pathology of the disease. It has been shown that the infection causes lymphopenia, due to impairment of lymphopoiesis and increased T lymphocyte apoptosis, with a strong inflammatory response including a massive release of cytokines that precipitates alveolar damage and causes multiorgan failure [28]. In the majority of severe cases, the increase in serum concentrations of cytokines, including IL-2R, IL-6, TNF-α and IL-1, leads to a "cytokine storm" probably associated with severe disease and high mortality [28]. As mentioned earlier, considering the underlying genomic susceptibility to infection, we assessed host gene polymorphisms by using nucleic acid extracts generated from swabs during the diagnostic processing of COVID-19 to evaluate the host's genetic profile. Although no clear pathogenetic pathway can be defined on the basis of our data, we found that DG homozygous polymorphic status in IFNL4 is significantly associated with a 16% increased risk of higher viral load (measured as a PCR Ct value < 26).
This evidence is in keeping with the key role of IFNλs in the response to viral infection by negative- and positive-sense RNA viruses, double-stranded RNA viruses and DNA viruses. As proof of the key role of type III IFNs in the regulation of the immune response, single nucleotide polymorphisms in IFNλ genes have been strongly associated with outcomes of viral infection [10]. Among them, rs12979860 (C/T), located ~3 kb upstream of IFNL3, and the frameshift variant rs368234815 (TT/ΔG), which are in high linkage disequilibrium, represent the strongest host factor associated with viral clearance. It has been reported that homozygosity for the IFNλ3-IFNλ4 variants, overrepresented in individuals of African descent, is associated with reduced viral clearance in children affected by acute respiratory infections, including Rhinovirus and Coronavirus [11]. Very recently, it has also been reported that a subset of patients with life-threatening COVID-19 pneumonia were characterized by neutralizing autoantibodies against type I IFNs, as were patients with inborn errors of type I IFN immunity [29, 30]. This evidence could suggest a protective role of type I IFN signaling against severe SARS-CoV-2 disease.
In line with this evidence, the use of IFNλs as antiviral drugs has been suggested in COVID-19 patients or in subjects at high infection risk, and randomized clinical trials with peg-IFNλ1 are currently ongoing [31].
It is conceivable that subjects with the rs12979860 TT and rs368234815 DG/DG haplotype show a lower ability to clear the virus (low PCR Ct values), associated with defective upregulation of inflammatory pathways (and hence a lower risk of hospitalization, which is due mainly to inflammatory damage).
All of the previous findings should be considered with caution, since this study has several potential limitations. In particular, our sample size is too small to evaluate complex associations and interactions with appropriate statistical power. A major limitation of small sample sizes, however, is the chance of type II error; thus, since we may have missed some associations, our results should be interpreted as exploratory and descriptive. Moreover, we used PCR Ct values as a proxy of viral load, although this relationship is still not well standardized and quantified and could thus be affected by a low degree of precision and accuracy. Finally, we did not control for other potential confounding factors (such as comorbidities) that could have influenced the results. We also cannot exclude the possibility that some hospitalizations were due to infection-control reasons. For this reason, we treated hospitalized subjects (those at higher risk of misclassification) with caution by performing a multinomial analysis with home isolation as the reference. In this way, even if the results obtained for hospitalized subjects were biased, both home isolation and death/critical care would be poorly influenced by this bias.
Despite these possible limitations, we are confident that our findings contribute some pieces to this very complex puzzle and suggest an interesting link between SARS-CoV-2 load at presentation, estimated by the rRT-PCR cycle threshold value, the severity of COVID-19 and specific IFNλ polymorphisms affecting the ability of the host to modulate viral infection in the early stages, thus acting as regulators of the ultimate outcome of COVID-19.
|
Within a very short time, the corona shock has hit the German economy with full force. Enormous financial resources are being deployed in the attempt to save existing jobs and firms. After years of rising employment, Germany also has a fundamentally very robust labor market. But given the scale of the economic shock, it would be illusory to believe that the crisis could be weathered without a marked increase in unemployment. If unemployment recedes again after the crisis, the shock would be painful but curable. However, that is not a foregone conclusion.
In earlier recessions, too, unemployment rose as demand for labor collapsed. This created cyclical unemployment, which should disappear again once the economy picks up. For decades, however, this was not the case in Germany (see Figure 1). According to Klinger and Weber (2016), within a year of a recession almost two thirds of cyclical unemployment turned into persistent, structural unemployment. That is, unemployment owed to a temporary cyclical weakness became entrenched over time. Several factors are relevant here. Unemployment can send employers a negative signal about a worker's abilities and motivation. The longer unemployment lasts, the more real demotivation can set in. And over the course of an unemployment spell, work experience and qualifications can become outdated. The latter played a particular role during the rise in unemployment from the 1970s onward. With the automation of classic factory work, computerization, and the establishment of the internet, the demands of working life changed markedly. The unemployment rate of the low-skilled temporarily rose to more than 25% (see Figure 1). When jobs disappeared in a recession, they did not simply come back afterwards. The labor-market effects of technological and structural change thus appear abruptly, even though the change itself can proceed quite continuously.
The dark blue line in Figure 2 shows the course of skill-biased technological change in Germany. Such change is characterized by favoring the productivity of the high-skilled relative to the low-skilled. Automation can create technologies that are complementary to high-skilled workers and give them additional tools, while innovations can substitute for low-skilled workers, putting their jobs under pressure. This produces a (qualification) mismatch in the labor market that creates adjustment needs and friction and, in the unfavorable case, entrenches unemployment. For decades, well into the 2000s, skill-biased technological change was strong. Accordingly, over several recessions the unemployment rate of the low-skilled rose far more sharply than that of people with vocational or academic qualifications (see Figure 1).
Things were different, however, in the Great Recession of 2009, when no such entrenchment occurred. Several reasons are likely relevant. Klinger and Weber (2016) point to the preceding labor-market reforms, and Germany entered that crisis with a strong upward trend in the labor market (Weber, 2015). Favorable trends, such as rising matching efficiency, continued through the crisis. An important reason, however, is also that at the time of the global financial crisis the technological upheavals that put low qualifications under pressure were much weaker than in the preceding decades (see Figure 2, dark blue line). The waves of computerization and the internet had already run their course, while those of Industrie 4.0 and artificial intelligence had not yet begun. This is probably one reason why the unemployment rate of the low-skilled was able to fall quickly after the recession, even below its pre-crisis level.
What is the relevant situation today, specifically with regard to the corona crisis? The dark blue line in Figure 2 shows that skill-biased technological change remained flat until recently. Despite the relatively favorable development before the current corona crisis, however, there was no reduction in unemployment as strong as that recorded up to 2008. Moreover, the labor market is affected differently today than in earlier decades. The measure of skill-biased technological change considered so far captures the advantage of higher and medium qualifications over lower ones. No such advantage has appeared in recent years. We have now additionally computed the same measure for the advantage of high qualifications over low and medium ones. The light blue line in Figure 2 shows that, although this measure advanced over the decades with a somewhat smaller slope than the one defined solely at the expense of the low-skilled, it has continued unchanged up to the present. Vocational qualifications in particular have thus fallen behind in recent years; the change has favored the high-skilled.
Today's situation may therefore be quite comparable, on a different level, to the period in which unemployment became entrenched in Germany: the development toward a digital Wirtschaft 4.0 and the ecological transformation are in full swing. The jobs that made up the German labor-market upswing may look quite different tomorrow. Wirtschaft 4.0 scenario analyses expect a clear trend toward higher qualifications and strong changes in the medium qualification segment (Wolter et al., 2016). The main risks may lie at the medium qualification level. If jobs disappear there during the recession, the probability is high that they will not re-emerge in the same form afterwards. Corona is producing a transformative recession. Despite the currently acute impact on low-skilled work, at its center is not necessarily simple work but the skilled-worker level, the core of the German education system. We must take seriously the risk that the unemployment arising in the corona crisis could become entrenched. Such permanent damage would impose an immense social and economic burden.
In Germany, several million new jobs subject to social insurance contributions are normally begun each year; if these fail to materialize, protracted unemployment is preprogrammed. Likewise, it has been shown that the labor-market integration of career entrants who become unemployed early suffers lasting damage (Möller and Umkehrer, 2015). In addition, many people who would normally take up the next job immediately after a dismissal or the expiry of a fixed-term contract now risk ending up in unemployment. According to results of the IAB Job Vacancy Survey (Bossler et al., 2018)
|
Respiratory infections in children are classified as upper respiratory infections and lower respiratory infections (LRI) based on the site of infection. Among LRI, community-acquired pneumonia (CAP) is the major cause of respiratory morbidity and mortality in children; it can be defined as symptoms of pneumonia caused by a community-acquired infection in previously healthy children without predisposing factors. CAP is one of the most common diseases in children and the second leading cause of death for children in developing countries. In addition, CAP is one of the most common causes of hospitalization in developed countries. The incidence of CAP is 10 to 40/10,000 for children under 5 years of age and 11 to 16/10,000 for children between the ages of 5 and 14 years. [1,2] Although the clinical manifestations of fever and respiratory symptoms are recommended for the diagnosis of CAP, chest X-ray is still the "gold standard" for diagnosis and severity assessment. Since it has been suggested that chest X-ray should not be used as a routine examination for CAP in children, [3] it is necessary to find new serum markers to replace chest X-ray in order to determine lung involvement, stratify the children, and decide whether radiological examination is necessary.
Prealbumin (PA) is a nonspecific host defense effector with a half-life of 1.9 days; thus, the serum level of PA can fall rapidly during acute infection, and this decrease is especially pronounced in bacterial infections. [4] Gao et al [5] found that the PA level was decreased during bacterial infection in children, whereas there was no obvious reduction in the viral infection and control groups. Thus, as a negative acute-phase protein, PA can be used to distinguish bacterial from viral infections in children with acute infectious disease. However, the role of PA in children with CAP is still unclear, especially its role in severity assessment. Therefore, our study aims to investigate the role of PA in the diagnosis and severity assessment of children with CAP in order to guide clinical decision-making.
The study was approved by the Ethics Committee of Soochow University. All patients and their family members signed the informed consent form. Patients' medical records were anonymized. From May 2014 to May 2016, 174 hospitalized children with CAP aged from 3 months to 12.4 years (average age 4.1 years) were retrospectively analyzed. CAP was defined as the presence of signs and symptoms of pneumonia (fever and respiratory symptoms) and pulmonary consolidation on chest radiography in a previously healthy child, caused by an infection acquired outside the hospital. [6] Children who had received antibiotic treatment for more than 48 hours before admission or who had an underlying chronic respiratory disease were excluded. A total of 33 healthy children were selected as the control group, aged from 8 months to 9.8 years (average age 4.7 years). The 174 CAP patients were further divided into 4 groups according to admission time: 47 cases in spring (March-May), 64 in summer (June-August), 32 in autumn (September-November), and 31 in winter (December-February). According to the extent of pulmonary infiltration on X-ray, the presence of hypoxemia, and the presence of pulmonary or extrapulmonary complications, [7] the CAP group was further divided into a mild group (133 cases) and a severe group (22 cases); 19 cases without a chest X-ray were not included in this analysis. Vital signs, including body temperature, heart rate, and respiratory rate (RR), were recorded.
In addition, inflammation markers were recorded, including white blood cell count (WBC) (reference 5-12 × 10⁹/L), percentage of neutrophils (40%-75%), neutrophil count (2-7 × 10⁹/L), percentage of monocytes (3%-10%), procalcitonin (PCT), C-reactive protein (CRP), PA, and erythrocyte sedimentation rate (ESR). Routine blood tests were performed on a Sysmex XN9000 (Hyogo, Japan); PCT was tested on a Roche cobas 8000 system (Indianapolis, IN, USA) (reference value 0.021-0.500 ng/mL); CRP and PA were tested on a BECKMAN COULTER AU5800 (Brea, CA, USA) (reference values 0-10.0 mg/L and 170-420 mg/L, respectively); and ESR was analyzed on an Alifax TEST1 analyzer (Padova, Italy) (upper reference limits: male 21 mm/h, female 26 mm/h). An indirect immunofluorescence approach was employed to detect immunoglobulin M (IgM) antibodies against 9 common respiratory pathogens: Legionella pneumophila, Mycoplasma pneumoniae (MP), Coxiella burnetii, Chlamydia pneumoniae, adenovirus, respiratory syncytial virus, influenza A virus, influenza B virus, and parainfluenza virus types 1, 2, and 3. The detection was performed using a VIRCELL IFA kit (Granada, Spain) according to the manufacturer's instructions. Fluorescence was observed using a EUROSTAR III PLUS fluorescence microscope (Lubeck, Germany).
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS version 13, Chicago, IL, USA). Skewed data are expressed as median (25th, 75th percentiles); categorical data are expressed as numbers or percentages. Non-normally distributed data from two groups were compared with the Mann-Whitney U test. Proportions were compared with the χ² test. The diagnostic efficiency of different markers was compared using receiver operating characteristic (ROC) curves and the area under the ROC curve. Multivariable analysis was performed using multivariate logistic regression, and regression coefficients and odds ratios were calculated. P < 0.05 was considered statistically significant.
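For readers who want to reproduce this kind of marker evaluation, the sketch below performs a ROC analysis with a Youden-index cutoff in Python's scikit-learn rather than SPSS; the synthetic numbers only loosely echo the reported medians and are not the study data.

```python
# Hedged sketch of a ROC analysis for a marker that decreases with disease,
# as PA does; the data are synthetic stand-ins, not study measurements.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
pa = np.concatenate([rng.normal(134, 30, 155),   # synthetic CAP group, mg/L
                     rng.normal(159, 20, 33)])   # synthetic healthy controls
is_cap = np.concatenate([np.ones(155), np.zeros(33)])

# Score by the negative of PA so that higher scores indicate disease
fpr, tpr, thresholds = roc_curve(is_cap, -pa)
auc = roc_auc_score(is_cap, -pa)

# Optimal cutoff by Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best_cutoff = -thresholds[np.argmax(j)]          # back on the PA scale
print(f"AUC = {auc:.3f}, optimal PA cutoff ≈ {best_cutoff:.0f} mg/L")
```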
There was no significant difference in age or sex ratio between the CAP and control groups. Comparing inflammation markers between the CAP and control groups (Fig. 1), the values of WBC, percentage of neutrophils, neutrophil count, and ESR in the CAP group were 10.13 (7.22, 15.34) × 10⁹/L, 64.4 (49.9, 76.9)%, 6.12 (3.66, 10.16) × 10⁹/L, and 14 (6, 25) mm/h, respectively, whereas the corresponding values in the control group were 6.30 (4.48, 7.66) × 10⁹/L, 49.7 (45.1, 56.5)%, 3.22 (2.14, 4.02) × 10⁹/L, and 6 (4, 17) mm/h, indicating that these 4 inflammation markers were significantly higher in the CAP group (all P < 0.05). PA in the CAP group was significantly lower than in the control group (134 [111, 162] vs. 159 [145, 170] mg/L).
The ROC curves of WBC, percentage of neutrophils, neutrophil count, ESR, and PA for the diagnosis of CAP are shown in Figs. 2 and 3; the parameters obtained from the ROC curves are shown in Table 1. Multivariate logistic regression using the variables (WBC, percentage of neutrophils, neutrophil count, ESR, and PA) that were statistically significant in univariate analysis was performed to examine the impact of multiple independent indicators on the diagnosis of CAP as the dependent variable. PA was the only significant protective factor (odds ratio: 0.974; 95% confidence interval [CI]: 0.956-0.993; P = 0.008) (Table 2).
General status was compared between mild and severe CAP (Table 3). IgM antibody testing of the 155 patients with CAP showed 38 MP-positive and 108 MP-negative results; 9 patients declined testing. There was no significant difference in the MP-positive rate between the 2 groups (25.6% vs. 28.6%, P = 0.774). Comparison of general status showed that RR at admission was significantly higher in the severe CAP group than in the mild CAP group (P < 0.05). Taking age into account, the criteria for tachypnea are as follows: ≥60 bpm (<2 months), ≥50 bpm (2 months-1 year), ≥40 bpm (1-5 years), ≥30 bpm (older than 5 years); an age-adjusted check is sketched below. The rate of tachypnea was significantly higher in the severe CAP group (13.6%) than in the mild CAP group (1.5%) (χ² = 5.439, P = 0.020). In addition, there were only 2 cases of pleural effusion and 1 case of myocardial damage in the severe CAP group.
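The age-banded definition quoted above translates directly into a small helper function; this is an illustrative sketch of the stated cutoffs, not code used in the study.

```python
# Age-dependent tachypnea check implementing the cutoffs quoted in the text;
# ages in months, respiratory rate in breaths per minute.
def is_tachypneic(age_months: float, rr_bpm: float) -> bool:
    if age_months < 2:
        return rr_bpm >= 60
    if age_months < 12:       # 2 months to 1 year
        return rr_bpm >= 50
    if age_months < 60:       # 1 to 5 years
        return rr_bpm >= 40
    return rr_bpm >= 30       # older than 5 years

assert is_tachypneic(6, 52) and not is_tachypneic(72, 28)
```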
Comparing laboratory indexes between the CAP groups (Fig. 4), the PA level was 104 (74, 136) mg/L in the severe CAP group, significantly lower than in the mild CAP group (136 [118, 163] mg/L) (P = 0.001). The ROC curve of PA for the assessment of CAP severity is shown in Fig. 5; the area under the curve was 0.728 (P = 0.001, 95% CI 0.604-0.853). The best cutoff value was 125 mg/L, with a sensitivity of 0.714 and a specificity of 0.703.
The common pathogens causing CAP include bacteria, viruses, mycoplasma, and chlamydia. In recent years, MP has become the most common cause of CAP in children. [8,9] Previous studies have shown that up to 40% of CAP cases involve MP infection, and 18% of these required hospitalization. [10] Recently, Shu et al [11] analyzed 1155 children with CAP in Shanghai; the detection rate of MP infection was 43.64%, higher than that of bacterial pathogens. The incidence of MP infection is related to the age and immune status of the patient, and recurrent infections are rare. Infants and young children under 3 years of age often manifest mild or subclinical infection. The peak ages for MP infection are the preschool and school years; 7% to 30% of CAP cases in patients between 3 and 15 years of age are caused by MP infection. [12] Shu et al [11] found that children between 6 and 14 years of age had a high detection rate of MP infection (77.4%), whereas the lowest detection rate (11.2%) was in children under 1 year of age. In our study, the detection rate of MP was 27.4%, lower than the 43.64% reported by Shu, which may be attributable to the lower average age of the children in this study. MP infections can occur throughout the year, and the peak season differs by region: the epidemic season is winter in northern China but summer and autumn in southern China. [12] There was no seasonal difference in the positive rate of MP infection in this study, which might be related to the small sample size. In addition, we found no significant correlation between the severity of CAP in children and the presence of MP infection; it is possible that bacterial pathogens were the major cause of severe CAP. [13]
Usually, in developed countries, CAP diagnosis is mostly clinical and is confirmed by the radiographic finding of consolidation. Valuable laboratory tools are also needed in the management of CAP in children; they can offer useful clinical information for determining appropriate treatment and antibiotic courses based on the detected etiologic agent, as well as the prognosis of the disease. PCT, a protein containing 116 amino acids, is normally produced by neuroendocrine cells in the thyroid and lungs at a very low rate and is undetectable in serum. [14] Inflammatory and infectious injuries stimulate overexpression of the CALC1 gene, consequently increasing serum PCT. Under these pathologic conditions, synthesis and secretion of PCT become ubiquitous. [15] CRP is an acute-phase protein synthesized in the liver and is considered a good index for the early diagnosis of inflammation. [16] The usefulness of PCT and CRP in the management of pediatric CAP has been carefully studied and compared. Overall, the latest data show that PCT is a better diagnostic marker than CRP, especially for the detection of pneumococcal infection. [17,18] However, there was no significant increase of PCT or CRP in our study, which might be attributed to a low rate of bacterial infection. Indeed, in CAP children infected with viruses and mycoplasma, PCT and CRP are not elevated or are only mildly elevated. [19,20] WBC, neutrophil percentage, neutrophil count, and ESR are the traditional indicators for screening for bacterial infection in children with CAP, but recent studies have demonstrated that these indicators are neither specific nor sensitive for distinguishing bacterial from viral infection. [21] In this study, these indicators were increased in patients with CAP, but their sensitivity for the diagnosis of CAP at the traditional cutoff values was low (all below 0.5); thus, their diagnostic value for CAP was limited. A possible reason is that the body's response to infection is poor if the child's immune function is low, in which case these indicators may remain normal. [22] PA is a carrier protein synthesized in the liver. PA eliminates toxic metabolites released during infection and is gradually consumed in the process; it is therefore a nonspecific host defense substance. Hrnciarikova et al [23] found that the elevation of serum CRP correlated with the decrease of PA in elderly people with infections, suggesting that PA has a clinical significance similar to that of CRP. Shao et al [24] found that PA levels were decreased in CAP groups infected with different pathogens, with the reduction most marked in the bacterial infection group. Similar results were found in this study, and the sensitivity for the diagnosis of CAP at the traditional cutoff value was 0.847, significantly higher than for the traditional inflammatory indicators. Moreover, PA was an independent protective factor for CAP in children in the multivariate analysis. We propose that combining PA with the traditional inflammatory indicators may compensate for their low sensitivity and improve the diagnostic efficacy for CAP.
Children with CAP may have fever (axillary temperature >38.5°C), coughing, wheezing, rapid breathing, shortness of breath, inspiratory retractions of the chest wall, breath holding, chest pain, headache, abdominal pain, or other symptoms. [6] Smyth et al [25] found that RR was helpful for assessing the severity of pneumonia in children under 1 year of age: the sensitivity of RR > 70/min for hypoxia was 63%, and the specificity was 89%. We also showed that RR was higher in the severe CAP group than in the mild CAP group. Because the rate of tachypnea remained higher in severe CAP than in mild CAP after accounting for age, tachypnea is a good sign of CAP severity.
The severity of pediatric CAP was defined by the extent of consolidation on chest X-ray and the presence of pleural effusion. [7] Because of this definition, no scoring system is available for the severity of pediatric CAP. Although X-rays are important for diagnosis, they should not be used as a routine method. Lee et al [26] found that PCT has a higher sensitivity and specificity than CRP in the differential diagnosis of lobar and bronchial pneumonia. Agnello et al [27] recently reported that CRP is better than PCT, neutrophil count, and WBC for the assessment of CAP severity. Our study did not find an advantage of traditional inflammation markers for the differential diagnosis of CAP severity, which may be attributed to the lower share of bacterial infections in CAP and the complexity of the pathogens.
Liao et al [28] compared PA levels in 54 patients with severe acute respiratory syndrome (SARS), 20 patients with pneumonia, and 30 healthy controls and found that PA was most reduced in the SARS group, followed by the pneumonia group and then the controls, suggesting that the extent of PA reduction correlates with the severity of pneumonia. In the present study, PA in children with severe CAP was significantly lower than in children with mild CAP, consistent with the findings of the aforementioned adult study. ROC analysis showed that PA had high sensitivity and specificity for assessing the severity of CAP at a cutoff value of 125 mg/L, which is lower than the traditional cutoff value (<170 mg/L), indicating that PA can be used as an indicator for severity assessment of pediatric CAP.
In conclusion, PA was highly sensitive for the diagnosis of CAP in children. Combined with traditional inflammatory markers such as WBC, percentage of neutrophils, neutrophil count, and ESR, PA may improve the diagnostic efficacy for CAP. In addition, PA was an independent protective factor for CAP in children. The reduction of PA correlated with the severity of CAP, so PA can be used as a reference index, in addition to chest X-ray, to assess severity and guide clinical decision-making. Of course, there were some limitations to our study, such as the small sample size, the low proportion of severe CAP, and the mild degree of inflammation, which need to be addressed in future work.
|
Infectious bronchitis (IB) is an acute viral respiratory disease of chickens and results in a significant economic loss to commercial chicken industries in many countries of the world. The disease is characterized by respiratory signs including gasping, coughing, sneezing, tracheal rales, and nasal discharge [1] . All ages of chickens are susceptible to IBV infection, but the clinical signs are more severe in young chickens [2] . In hens, respiratory distress and a decrease in egg production have been reported [3] . Some strains of IBV can cause acute nephritis and urolithiasis associated with a high mortality of infected chickens [4, 5] . In addition, IBV has also been reported to cause proventriculitis [6] . Furthermore, the disease is a risk factor for secondary bacterial infections resulting in an even higher morbidity and mortality rate [5] .
Infectious bronchitis virus (IBV), the causative agent of IB, is a coronavirus. The genome of IBV consists of positive-sense single-stranded RNA, approximately 27.6 kilobases in length [7], that encodes four structural proteins: the nucleocapsid (N) protein, envelope (E) protein, membrane (M) glycoprotein, and spike (S) glycoprotein [8]. The S glycoprotein is post-translationally cleaved into the S1 and S2 subunits [9]. The S1 subunit, located on the outside of the virion, is responsible for the fusion between the virus envelope and the cell membrane of the host [7]. It contains virus-neutralization and serotype-specific epitopes formed by amino acids within the defined hypervariable region (HVR); therefore, the molecular characterization of IBV is based on analysis of the S1 gene [10].
The continuous emergence of new serotypes or variant strains of IBV has been reported worldwide [3, 4, 11-13].
These events are thought to be generated by mutation processes, including deletion and insertion of nucleotides within the IBV genome; moreover, evolution by genetic recombination has also been reported [12]. New serotypes or variant strains of IBV can cause disease in vaccinated chickens [3, 4, 11]. Therefore, these emergences are of great concern to poultry producers.
In Thailand, an outbreak of IB was first reported during 1953-1954 [14]. Since then, IB has remained an economically important disease in the Thai poultry industry and can be found all over the country [15, 16]. Recently, we characterized IBV isolated in Thailand from January to June 2008 by analysis of the HVR of the S1 gene and found that the Thai IBV isolates were divided into two groups: QX-like IBV and a group unique to Thailand [17]. In this study, the objective was to determine the genetic variation exhibited within Thai IBV isolates in 2008-2009 by analyzing the complete S1 genes and comparing them with previously published strains.
The fifteen Thai IBV isolates used in this study are listed in Table 1. All of them were isolated from commercial poultry farms in Thailand that had been experiencing respiratory disease between January 2008 and October 2009. All flocks had been vaccinated against IB with the commercial live attenuated H120 vaccine.
Before virus isolation, all of the samples were screened by nested-PCR for the presence of IBV, as described by Pohuang et al. [17], and were positive. In brief, trachea and lung samples were taken from pools of chickens from the same farm. The samples were prepared as 10% w/v suspensions in phosphate-buffered saline (pH 7.4) and centrifuged at 1,800×g for 10 min. The supernatants were then collected for RNA extraction using the Viral Nucleic Acid Extraction Kit (Real Biotech, Taiwan) following the manufacturer's instructions. The extracted RNA was subjected to nested-PCR using the primer sets and reaction conditions described previously [17]. The supernatants of IBV-positive samples were inoculated into 9-11-day-old embryonated chicken eggs. Each egg received 0.2 ml of the supernatant. The inoculated eggs were incubated at 37°C and candled daily. At 96 h post-inoculation, allantoic fluids were harvested. A further blind serial passage was performed in a similar way. All of the allantoic fluids were harvested and stored at -70°C. The allantoic fluids were then used for RNA extraction as described above.
The primer sets used in this study were newly designed to amplify the full length of the S1 gene. Two sets of primers were used, with the following sequences: the primers used to amplify the 5′ half of the S1 gene were 5′-GCCAGTTGTTAATTTGAAAAC-3′ and 5′-TAATAACCACTCTGAGCTGT-3′, and the primers used to amplify the 3′ half of the S1 gene were 5′-ACTGGCAATTTTTCAGATGG-3′ and 5′-AACTGTTAGGTATGAGCACA-3′. Reverse transcription-polymerase chain reaction (RT-PCR) was accomplished in one step using the AccessQuick RT-PCR System (Promega, Madison, WI, USA). RT was performed at 48°C for 45 min, followed by heating at 94°C for 5 min. PCR was then performed with 35 cycles of denaturation at 94°C for 60 s, annealing at 54°C for 30 s, and extension at 72°C for 60 s, with a final extension at 72°C for 10 min. The PCR product was analyzed by electrophoresis on a 1.2% agarose gel, stained with ethidium bromide (0.5 µg/ml), and visualized with an ultraviolet transilluminator.
The RT-PCR products were cut from the gel and purified using the Wizard SV Gel and PCR Clean-Up System (Promega, Madison, WI, USA) following the manufacturer's protocols. The purified RT-PCR products were sequenced in both the forward and reverse directions by a commercial service (First Base, Selangor, Malaysia).
The nucleotide sequences of the S1 gene, from the ATG start site to the cleavage recognition site, of the fifteen Thai IBV isolates were assembled and aligned. They were first compared with published IBV sequences deposited in the GenBank database using a BLAST search via the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov/BLAST/). Sequences identified by the BLAST analysis were included in the alignment and phylogenetic construction. Multiple sequence alignments and determination of nucleotide and amino acid identities were performed using BioEdit version 7.0.5.2 [18]. A phylogenetic tree of the nucleotide sequences was constructed with the neighbor-joining method using MEGA version 4 [19]. Bootstrap values were determined from 1000 replicates of the original data. The S1 gene sequences of the fifteen IBV isolates were submitted to the GenBank database (Table 1). The other S1 gene sequences from the GenBank database used for comparison or phylogenetic analysis in this study included M41 (AY561711), Ma5
Putative recombinant sequences and their parental strains were identified with SimPlot version 3.5.1 [20]. Nucleotide identity was computed using the Kimura 2-parameter method with a transition-transversion ratio of 2. The window width and step size were 200 and 20 bp, respectively. Bootscan analysis was also carried out with the subprogram embedded in SimPlot, with signals in 70% or more of the observed permuted trees taken as indication of potential recombination events [21]. Recombination breakpoints were analyzed by maximization of χ² using the Findsites program included in SimPlot [22].
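To make the sliding-window idea concrete, here is a minimal SimPlot-style sketch in Python. It is an approximation under stated assumptions: raw percent identity is used in place of the Kimura 2-parameter distance, sequences are assumed pre-aligned and of equal length, and all variable names are placeholders.

```python
# Illustrative SimPlot-style scan; an approximation of the method, not a
# reimplementation of SimPlot.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Identity over aligned, equal-length sequences; gap columns skipped."""
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != "-" and b != "-"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

def similarity_plot(query: str, parent: str, window: int = 200, step: int = 20):
    """Windowed identity of the query vs. one putative parent (200/20 bp as above)."""
    points = []
    for start in range(0, len(query) - window + 1, step):
        ident = percent_identity(query[start:start + window],
                                 parent[start:start + window])
        points.append((start + window // 2, ident))  # window midpoint, % identity
    return points

# A crossover is suggested where the two parents' curves swap rank, e.g.:
# curve_qx = similarity_plot(thai_isolate_s1, qxibv_s1)
# curve_jx = similarity_plot(thai_isolate_s1, jx9901_s1)
```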
A phylogenetic tree was constructed using the nucleotide sequences of the S1 genes (from the ATG start site to the cleavage recognition site) of the Thai field IBV isolates and the GenBank-deposited sequences. The fifteen Thai IBV isolates were separated into three distinct groups (Fig. 1).

Recombination in the S1 gene
Recombination events in the S1 gene sequences of Thai IBV were analyzed using SimPlot. In the similarity plot, strains were considered recombinants if a crossover event took place between two putative parental strains. By this analysis, recombination events were found in groups I and II, but not in group III Thai IBV. Overall, groups I and II clustered into different groups in the phylogenetic analysis of the S1 gene (Fig. 1). However, in the similarity plot of group I (represented by isolate THA90151), the 3′-terminus of the S1 gene was similar to that of group II (Fig. 2). Interestingly, the 5′-terminus of group I was similar to isolate THA001, an isolate unique to Thailand recovered in 1998 (Fig. 2). The positions of the recombination breakpoints were estimated at nt 679-699 (maximization of χ² = 146.4). The similarity plot of group II (represented by isolate THA80151) showed that the 5′-terminus of the S1 gene was similar to Chinese QXIBV, whereas a short region at the 3′-terminus was similar to the Chinese strain JX/99/01 (Fig. 3). The positions of the recombination breakpoints were estimated at nt 1531-1537 (maximization of χ² = 71.0).
Previously, we demonstrated that IBVs isolated in Thailand between January and June 2008 were divided into two distinct genotypic groups based on analysis of the HVR of the S1 gene [17]. After that, we continued to collect IBV samples until October 2009, and the complete S1 gene sequences were determined in order to obtain more information about the evolution of Thai IBV. Herein, we found that the fifteen Thai IBV isolates from 2008-2009 were divided into three distinct genotypic groups. Compared with our previous report [17], the group I Thai IBV clustered into a different group but was still unique to Thailand, group II clustered with QX-like IBV, and group III, reported only in this study, clustered with the Massachusetts type. The result suggests that at least three groups of IBV are currently circulating in Thailand. Although the basic amino acid residues of the cleavage recognition site of IBV did not correlate with cleavability, host cell range or virulence, as they do in orthomyxoviruses and paramyxoviruses, these sequences correlated with the geographic distribution of IBV [23]. Two spike glycoprotein cleavage recognition site sequences were found in the Thai IBV isolates. The sequence Arg-Arg-His-Arg-Arg was found in the group I and II Thai IBV; this sequence had previously been reported in Chinese IBV isolates [24]. The other sequence, Arg-Arg-Phe-Arg-Arg, was found in the group III Thai IBV and has been found in many countries [23, 24]. Based on these observations, the group I and II Thai IBV isolates have a close relationship with Chinese IBV isolates.

(Fig. 2 caption: Similarity plot of the S1 gene of the group I Thai IBV, represented by THA90151. Isolate THA80151 (filled triangle) and isolate THA001 (filled square) were used as putative parental strains when isolate THA90151 was queried; M41 (no fill) was used as an outlier sequence.)
Overall, the group I Thai IBV appeared to be different from previously published strains by phylogenetic analysis. Interestingly, when the potential recombination event was analyzed, the 5′-terminus of the S1 gene was similar to isolate THA001, which was isolated in Thailand in 1998 [18], but the remaining sequence was similar to the group II Thai IBV. Surprisingly, when the S1 genes of the group II Thai IBV were analyzed, a recombination event was also observed in the nucleotide sequences near the cleavage recognition site. Although the group II Thai IBV appeared to be similar to QXIBV, a region near the cleavage recognition site was similar to JX/99/01, isolated in China in 1999 [24]. Our primary concern was the possibility of viral recombination resulting from co-infection with heterologous field strains in the chicken flocks. Recombination may have occurred following co-infection with QXIBV and JX/99/01, giving rise to the group II Thai IBV. Subsequently, co-infection and exchange of genetic information between the group II Thai IBV and isolate THA001 may have resulted in the group I Thai IBV. As reported by Wang et al. [25], natural recombination can occur within the S1 gene of IBV field isolates. The natural recombination events observed here indicate that the S1 gene of IBV is a potential site for recombination and that the exchange of genetic information can occur in more than one region of the gene.
Recombination is thought to occur through the switching of the polymerase from one template to another during genomic synthesis [26]. Specifically, intergenic (IG) consensus sequences (CTGAACAA or CTTAACAA) serve as recombination "hot spots" [27]. Sometimes, the presence of homologous nucleotide sequence regions between strains may serve as a potential recombination junction or crossover site [25]. Although the IG consensus sequences were not observed in this study, we found highly conserved nucleotide sequences around the recombination breakpoint regions in the putative parental strains (data not shown). This suggests that homologous nucleotide sequence regions between strains may serve as crossover sites in our isolates.
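As a rough illustration of breakpoint estimation by χ² maximization (the idea behind the Findsites step described in the methods), the sketch below scores each candidate position by a 2×2 table of matches to the two putative parents on either side of that position. This is an assumption-laden approximation, not the actual Findsites algorithm.

```python
# Approximate breakpoint search by chi-square maximization over candidate
# positions; sequences are assumed pre-aligned and of equal length.
from scipy.stats import chi2_contingency

def best_breakpoint(query: str, parent_a: str, parent_b: str):
    match_a = [int(q == a) for q, a in zip(query, parent_a)]
    match_b = [int(q == b) for q, b in zip(query, parent_b)]
    best_pos, best_chi2 = None, 0.0
    for pos in range(10, len(query) - 10):   # skip degenerate edge tables
        # rows: matches to parent A / parent B; columns: left / right of pos
        table = [[sum(match_a[:pos]), sum(match_a[pos:])],
                 [sum(match_b[:pos]), sum(match_b[pos:])]]
        chi2 = chi2_contingency(table)[0]
        if chi2 > best_chi2:
            best_pos, best_chi2 = pos, chi2
    return best_pos, best_chi2  # O(n^2) as written; adequate for a sketch
```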
QXIBV was first described and identified in China [28]; this IBV genotype subsequently spread and became one of the most prominent genotypes in many countries [29-31]. Although the complete S1 gene of the group II Thai IBV shared 95.5-96.0% nucleotide identity with Chinese QXIBV, the nucleotide sequences at the 3′-terminus near the cleavage recognition site differed from Chinese QXIBV. We found that this region was closely related to the Chinese isolate JX/99/01. Interestingly, this change was not found when the S1 genes of QXIBV reported in other countries were compared (data not shown). These findings suggest that the group II Thai IBV has undergone evident evolutionary change in Thailand. The isolates in group III clustered with the Massachusetts type, which has also been isolated in many countries worldwide, including Asian countries such as China, Japan, and South Korea [24]. Some of the Massachusetts-type isolates reported here may represent field challenge viruses arising from point mutations of vaccine strains, because they shared 97.5-99.9% nucleotide and 93.2-99.7% amino acid identity with the vaccine strains (data not shown). It has been shown that the S1 gene of the pathogenic 4/91 virus differs by only 0.6% from the vaccine strain, although it cannot be concluded that this small number of sequence differences is responsible for the attenuation of pathogenicity [32]. However, the possibility that some of our isolates were re-isolations of vaccine strains cannot be excluded, given their 100% identity with the Massachusetts-type vaccine strain used in Thailand. As indicated in a previous report, vaccine strains and field challenge viruses of the same genotype cannot be conclusively distinguished, especially when sequence identity is between 99% and 100% [33].
The data obtained from this study indicate that IBV in Thailand undergoes genetic recombination. Natural recombination is contributing to the emergence of new genotypes or IBV variants in the field, which complicates the prevention and control of IB. Thus, novel measures for the prevention and control of IB are required.
|
Most analyses of the genetic basis of cancer risk in mice begin with an inbred strain with an unusually high rate of occurrence of a specific tumor type, and involve examination of back-crosses and congenic stocks to try to map the gene or genes responsible for the index lesion. Such an approach makes excellent use of the inbred nature of most laboratory mouse lines, in which uniform homozygosity often reveals the effects of recessive alleles at loci whose wild-type alleles prevent neoplastic transformation. The use of inbred stocks to screen for cancer-prone alleles has some limitations, however, including the possibility that the phenotype may depend upon a specific combination of recessive alleles that may be unlikely to occur together except in the context of deliberate inbreeding. The historical development of mouse genetics, linked for many years to the retention and selection of mouse stocks with unusually high tumor incidence, is also likely to have favored the detection of rare allele combinations of this kind.
For several years, we have been seeking evidence for polymorphic genes that modulate life span and age-sensitive traits in a four-way cross between four of the common laboratory inbred stocks. The mice of this UM-HET3 (UM-HET stands for "University of Michigan genetically Heterogeneous") stock are the progeny of (BALB/c × C57BL/6J)F1 females and (C3H/HeJ × DBA/2J)F1 males. The construction of this stock ensures that each of the test mice receives 25% of its genes from each of the four inbred grandparents, but none of the test mice receives two copies of any allele from any of the four progenitors. Thus, the test mice will not be homozygous at any loci, except those loci where by chance two of the four progenitors carry identical alleles. A genome scan of this population thus has the potential to reveal evidence for loci where a polymorphism modulates risk of a specific illness to an extent that is relatively independent of any one specific set of background genes.
In three successive studies of the UM-HET3 population, we have accumulated comprehensive necropsy data for over 1000 individual genotyped mice, of which 886 cases were judged to have died from a specific illness. We report here the results of genome scan calculations that reveal loci on mouse chromosomes 1, 4, and 6, and possibly chromosome 11, which modulate the risk of specific forms of neoplasia diagnosed at terminal necropsy.
All of the mice used derived from a four-way cross among four inbred strains: BALB/cJ (C), C57BL/6J (B6), C3H/HeJ (C3), and DBA/2J (D2). These experimental animals are the progeny of (C × B6)F1 females and (C3 × D2)F1 males and are referred to as UM-HET3 mice. The F1 breeding animals were purchased from The Jackson Laboratory (Bar Harbor, ME). Animals were housed segregated by sex (except for the BCA [Breast Cancer and Aging] females from 3 to 6 months of age; see below) in a single suite of specific pathogen-free (SPF) rooms and were exposed to identical environmental conditions (12:12 hour light:dark cycle, 23°C). Mice were given ad libitum access to water and Purina (St. Louis, MO) laboratory mouse chow. The cages were covered with microisolator tops to minimize the spread of infectious agents. Sentinel mice were tested every 3 months to verify pathogen-free status. The test battery always included titers for Sendai, minute virus of mice (MVM), and coronavirus (mouse hepatitis virus), and one of the four quarterly tests included, in addition, assessment of titers for PVM (pneumonia virus of mice), GD-7, Reo-3 (reovirus), Mycoplasma pulmonis, LCMV (lymphocytic choriomeningitis virus), ectromelia, K virus, polyoma virus, mouse adenovirus, and parvovirus. All such tests were negative throughout the course of the study. This work was approved by the Animal Care and Use Committee at the University of Michigan.
Necropsy data were pooled for analysis from three independent cohorts. The LAG1 (Longevity Assurance Gene) cohort consisted of mice born between 9/93 and 3/95, and included 131 virgin males and 147 virgin females. (The numbers given refer to those animals that were submitted for necropsy.) The BCA cohort consisted of mice born between 9/94 and 7/95, and included 293 female mice. All mice in the BCA cohort were caged with males (not part of the life span study) between 2 and 6 months of age, and nearly all of them gave birth to multiple litters at these ages; the litters were removed from the cages at age 21 days. The LAG2 cohort consisted of mice born between 3/98 and 10/99, and included 179 virgin males and 254 females.
Mice were culled from the colony for several reasons: (a) when male mice were found to have been seriously injured by bite wounds, all mice in the cage (regardless of health status) were culled, typically at age 6-14 months; (b) a small number of mice were removed from the database because technical errors resulted either in their escape from captivity or loss of information about exact date of death.
Mice were inspected at least daily. Mice suspected to be ill (because of weight loss, poor grooming, or visible tumor) were observed twice daily except on weekends. Mice judged by an experienced technician to be so severely ill that survival for more than a few additional days was unlikely were taken to the necropsy suite and humanely euthanized by CO2 asphyxiation; this group made up 59% of the total. The criteria used for this decision included: (a) large and/or bleeding tumor mass; (b) inability to eat or drink; (c) loss of more than 10% of body weight in a week; (d) lethargy and lack of responsiveness to gentle prodding; and (e) other signs of inanition, such as poor grooming and hunched posture. Mice found dead were also submitted for necropsy. The necropsy protocol has been described in detail elsewhere (1), and involved both gross inspection and histological examination of sections from 37 organs. Histopathological evaluation of cases in the LAG1 and BCA cohorts was conducted by the late Clarence Chrisp. Evaluation of cases in the LAG2 cohort was performed by Ruth Lipman. A selection of cases with a provisional diagnosis of fibrosarcoma (from the LAG1 and BCA cohorts) or hemangiosarcoma (from the LAG2 cohort) were directly compared to ascertain that the difference in diagnosis reflected physical differences in the lesions and not merely a difference in nomenclature. Although it is difficult to determine the cause of death in mice found dead, or to assign a cause of death in cases where severe morbidity prompted euthanasia, for this study a lesion was deemed to be the cause of death if it was the sole serious lesion present, or if it was so severe that it was likely to have led to the death of the mouse or to the symptoms that led the husbandry staff to submit the animal for euthanasia. Mice with no severe lesions or with multiple severe lesions were classified as "unknown" in the cause-of-death tabulation.
Genomic DNA was prepared from 1 cm sections of tail from 4-week-old animals using a standard phenol extraction method (2). Final DNA preparations were tested for concentration, ability to sustain polymerase chain reaction (PCR) amplification under standard conditions, and electrophoretic size distribution. DNA was genotyped using an ALFexpress automated sequence analyzer (Pharmacia, Piscataway, NJ); the details of this genotyping method have been described previously (3). Primer pairs were purchased from MWG Biotech, Inc. (High Point, NC). In total, 185 markers were examined at 99 genetic loci. Of the 99 loci, 86 were informative for both the maternally and paternally derived alleles, and 13 were informative for only the maternal or the paternal allele. The selection of genetic loci has been described previously in detail (4). Chromosomal localization and order of markers were calculated using the MapMaker QTX program package (Whitehead Institute, MIT, Cambridge, MA).
Qualitative Trait Loci (QualTL) is a maximum likelihood-based method for interval mapping analysis of discrete phenotypes from experimental genetic crosses. From a conceptual point of view, QualTL is an example of a latent class model in which the categorical outcomes (phenotypes) are modeled as a function of the unknown (unobservable) genotype at a putative marker. The computational algorithm is outlined in Galecki and colleagues (5), and the method has been used successfully in a previous study of fitness and life span in Caenorhabditis elegans (6). In the context of the current article, single-marker analysis was used instead of interval mapping, so a simplified version of the QualTL algorithm corresponding to a standard logistic regression was used. The analysis is designed to evaluate the effects of genetic differences on the proportions of mice dying of specific causes, or with specific lesions. This approach is biased to an unknown extent by a potential confound between the cause of death and the life span of the mice; the data set cannot provide information about risks for a specific diagnosis in mice at ages later than the age at death.
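To make the single-marker analysis concrete, the sketch below simulates a binary cause-of-death phenotype, scores each marker with a 2 × 2 chi-square statistic (for a biallelic marker, a test equivalent to the logistic regression described above), and obtains an experiment-wise p-value by permuting phenotype labels and recording the genome-wide maximum statistic. This is a minimal illustration on simulated data, not the authors' QualTL implementation (which is described in reference 5); the marker count and effect size are arbitrary.

```python
# Minimal sketch of a single-marker scan with a permutation-based
# experiment-wise p-value; all data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_mice, n_markers = 900, 99
genotypes = rng.integers(0, 2, size=(n_mice, n_markers))  # 0/1 allele per locus

# Simulate a phenotype influenced by one hypothetical risk locus (marker 10):
# ~13% baseline risk, +10 percentage points for carriers of allele 1.
risk = 0.13 + 0.10 * genotypes[:, 10]
phenotype = rng.random(n_mice) < risk  # True = died of the lesion in question

def chi2_stat(geno_col, pheno):
    """Pearson chi-square statistic for a 2x2 allele-by-phenotype table."""
    table = np.array([[np.sum((geno_col == g) & (pheno == p))
                       for p in (0, 1)] for g in (0, 1)], dtype=float)
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    return np.sum((table - expected) ** 2 / expected)

observed = np.array([chi2_stat(genotypes[:, m], phenotype) for m in range(n_markers)])

# Experiment-wise p: fraction of label permutations whose genome-wide
# maximum statistic is at least as large as a given marker's statistic.
n_perm = 500
max_null = np.array([
    max(chi2_stat(genotypes[:, m], rng.permutation(phenotype))
        for m in range(n_markers))
    for _ in range(n_perm)
])

for m in np.argsort(observed)[::-1][:3]:
    print(f"marker {m}: chi2 = {observed[m]:.2f}, "
          f"experiment-wise p = {np.mean(max_null >= observed[m]):.3f}")
```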
Three life span experiments were conducted using genetically heterogeneous mice of the UM-HET3 stock, bred from a cross between CB6F1 dams and C3D2F1 sires. The LAG1 and LAG2 experiments each involved a mixture of virgin male and virgin female animals, and the BCA experiment involved female mice that had been repeatedly pregnant from age 2 months to 6 months. Previous publications have focused on the genetic control of T-cell subsets (3, 7) , life span (8, 9) , hormone levels (10), cataracts (11) , and body weight (12) in one or more of these populations. Table 1 summarizes key life table statistics for this set of longevity experiments.
A surveillance system, described in (1), was employed to increase the proportion of mice that produced useful diagnostic necropsies. Key features included at least daily inspection for recent deaths (typically twice a day, 5 days each week) and a willingness to euthanize mice that appeared so severely ill that survival for more than a few additional days seemed unlikely. This system allowed us to obtain informative necropsies on more than 90% of the mice in this study (excluding those culled for fighting or because of errors in record-keeping or mouse husbandry). An attempt was made to infer a most likely cause of death in each case, and this was possible in 886 of the 1004 cases evaluated. In the other 118 cases, no single cause of death could be assigned, (a) because advanced tissue autolysis prevented evaluation, (b) because two or more lesions were judged to have contributed to death or moribund state, or (c) in a small number of cases, because no lesion serious enough to have caused death could be identified. Table 2 summarizes the cause-of-death diagnoses in each of the three independent cohorts, further stratified by sex for the LAG1 and LAG2 series. Table 2 also lists (right column) the total number of cases in the series with the diagnoses indicated. The "miscellaneous nonneoplastic" disease category included a wide range of illnesses, such as peritonitis, brain infarct, glomerulonephritis, inanition secondary to enamel organ dysplasia, myocardial degeneration, ovarian hemorrhage, polyarteritis nodosa, and various localized infections. The "miscellaneous neoplastic" category included diagnoses of mast cell tumor, rhabdomyosarcoma, islet cell carcinoma, pheochromocytoma, sperm granuloma, myxosarcoma, c-cell adenocarcinoma, mesothelioma, nerve sheath tumor, thyroid adenocarcinoma, leiomyosarcoma, osteosarcoma, and others.
Several features of Table 2 should be noted because they influenced the gene mapping strategy. There is evidence of unanticipated differences in pathology between the LAG1 and LAG2 cohorts. Mouse urinary syndrome, for example, was common among males in the LAG1 group, but not noted in the LAG2 males. In addition, we found unexpectedly that fibrosarcoma was frequently a cause of death in the LAG1 and BCA mice (18% of virgin females and 5% of virgin males), but was infrequent in the LAG2 series (3% of virgin females and 1% of virgin males). Conversely, hemangiosarcoma was rare in the LAG1 group (3% and 1% in females and males, respectively), but much more common in LAG2 (13% and 10%, respectively). These variations in lesion prevalence may reflect subtle environmental differences under which the cohorts of mice were raised. It is also apparent that the proportion of mice dying of mammary adenocarcinoma was far higher among multiparous females (24% of diagnosable cases) than among virgin females (9%), consistent with previous assessments of the effects of hormonal environment on mammary cancer in mice. A qualitative trait locus (QualTL) mapping procedure was then used to seek evidence for loci at which allelic differences modulated the likelihood of specific causes of death. Results for which experiment-wise p < .10 are summarized in the first five lines of Table 3. Four of these met the criterion of p < .05, the preselected significance threshold for the protocol, and one (the maternal allele of D11Mit156 as a predictor of hepatocellular carcinoma) was suggestive but not definitive at p = .09. For hepatocellular carcinoma and lymphoma, the calculations involved data from both male and female mice pooled together. Because the risk of mammary adenocarcinoma differs between multiparous and virgin females, the calculation for this cause of death was done three times: once for multiparous females (BCA cohort only), once for virgin females (LAG1 and LAG2 cohorts), and once for all females regardless of mating status. Only the multiparous females showed a significant genetic effect, which is therefore listed in Table 3. The genome scan found no significant QualTL for hemangiosarcoma or for fibrosarcoma, but did find a strong association between D4Mit170p on mouse chromosome 4 and the likelihood that a mouse would die either of fibrosarcoma or of hemangiosarcoma, considering these two histologically distinct lesions as a single combined category. Figure 1 illustrates the strength of association between alleles of D1Mit206 and D4Mit84 and the risk of dying of lymphoma. For each of these loci, inheriting the D2 allele gives a substantially increased risk of death by lymphoma. The two effects are additive; as shown in the bottom panel of the figure, mice that inherit the D2 allele at both loci have a 36% chance of dying from lymphoma, compared to a 13% chance for mice that inherit the C3 allele at both loci. Figure 2 illustrates the strength of association between alleles at D4Mit55 and the risk of mammary adenocarcinoma death in multiparous female mice. The C3 allele at this locus increases the risk of death from breast cancer from 12% to 34%. When the outcome was "incidental" mammary adenocarcinoma, that is, the presence of this lesion at necropsy regardless of inferred cause of death, the incidence rates were 15% and 45%, respectively, a difference of threefold.
No locus reached significance when the calculations included only virgin female mice, or when the group of virgin and multiparous female animals was evaluated together. For virgin females, the risk of death from mammary adenocarcinoma was 6.8% (12/176) in mice with the C3 allele at D4Mit55, and 6.7% (13/193) in mice with the D2 allele. Thus, the D4Mit55-linked locus seemed to modulate breast cancer risk in multiparous mice only.
A genome scan was also used to seek loci that influenced the likelihood of "incidental" lesions, that is, lesions noted at necropsy whether or not they were thought to have led to death or moribund status. Only one new association was detected with this set of outcome measures, using an experiment-wise significance criterion of p < .05: as shown at the bottom of Table 3, the C allele at locus D6Mit198 was found to be associated with the appearance of pulmonary adenocarcinoma at necropsy (p = .007). This lesion was found in 23% of the mice with the C allele at D6Mit198, and in 13% of those with the B6 allele. Pulmonary adenocarcinoma was more common in males than in females (see Table 2), but the effect of the locus linked to D6Mit198 was apparent both in male mice (36% vs 20%; p < .003 by chi-square test) and in female mice (17% vs 9%; p = .012). Although the association between D6Mit198m and lethal pulmonary adenocarcinoma did not reach experiment-wise significance in the genome scan, a post hoc analysis by chi-square testing found higher rates of lethal pulmonary adenocarcinoma associated with this locus in males (22% vs 12%, p < .03), though not in females (7% vs 4%, p = .16).
Notes to Table 3: (c) indicates the group of mice evaluated (all mice, or just multiparous females) and the number of mice included in the calculation; values of "N" differ slightly because the amount of missing genotype data varied among the SSLP marker loci. (d) Gives the proportion of cases with the indicated trait for each of the two relevant genetic alleles. (e) The permutation-based experiment-wise probability level for the association. (f) An asterisk indicates those associations that reach the p < .05 significance criterion. (g) Pos (cM), Pos (mb), and MGI indicate, respectively, the position of the SSLP marker in centimorgans from the centromere, the position of the SSLP marker in millions of base pairs, and the identification code in the Mouse Genome Informatics database. SSLP = simple sequence length polymorphism; NA = not available; QTL = quantitative trait locus.
Our previous work has found evidence, in the LAG1 population, for genetic loci on chromosomes 2, 9, 10, 12, and 16 with significant effects on life span in male mice, female mice, or both (8, 9) , and has shown that the effects of at least three of these loci modulate life expectancy both in mice whose death is caused by neoplasia and those that die of a nonneoplastic disease (9) . The current study does not focus on length of life, but instead on genes that modulate the cause of death rather than its time of occurrence.
Necropsy cases from three separate studies of the same genetic cross were combined to increase statistical power, even though this strategy runs the risk that differences among the studies may have diminished the strength of gene/trait association in the pooled data set. Some of these differences are matters of design, such as the inclusion of virgin mice in LAG1 and LAG2 and the use of multiparous females for the BCA population. It is clear, though, that the studies also differ in ways that were unanticipated. For example, males in the LAG1 group had a high incidence of the mouse urinary syndrome (13, 14), which for unknown reasons was not observed among the LAG2 males. Fibrosarcoma, though commonly judged to be the cause of death among the LAG1 mice (particularly among the virgin females), was much less common in the LAG2 series; conversely, hemangiosarcoma was a frequent cause of death in the LAG2 mice only. Because each population was produced by breeding (BALB/c × C57BL/6J)F1 females to (C3H/HeJ × DBA/2J)F1 males, we can only speculate that these differences reflect some uncontrolled factor(s), such as unknown contaminants in water or food, or subtle alterations in environmental noise, air quality, or the like.
The genome scan results for lymphoma are the most straightforward. In the UM-HET3 population, lymphoma is the most frequent single cause of death, responsible for the deaths of about 35% of virgin females, 18% of multiparous females, and 8% to 25% of virgin males. Mice inheriting the D2 allele at D1Mit206 or the D2 allele at D4Mit84 were at greatest risk of dying from lymphoma, with each of these two loci contributing about an equal risk in an additive fashion.
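The additive two-locus pattern amounts to a simple stratified tabulation: group mice by the inherited allele at each locus and compute the fraction dying of lymphoma in each of the four combinations. The sketch below reproduces this style of tabulation on simulated data; the data frame, column names, and effect sizes are hypothetical, chosen only to roughly mirror the reported 13% to 36% range.

```python
# Sketch of the two-locus risk tabulation; data are simulated, not the
# study's. Each D2 allele adds roughly 11-12 percentage points of risk.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "D1Mit206": rng.choice(["C3", "D2"], size=n),
    "D4Mit84":  rng.choice(["C3", "D2"], size=n),
})
risk = 0.13 + 0.115 * (df["D1Mit206"] == "D2") + 0.115 * (df["D4Mit84"] == "D2")
df["lymphoma_death"] = rng.random(n) < risk

summary = (df.groupby(["D1Mit206", "D4Mit84"])["lymphoma_death"]
             .agg(["mean", "count"])
             .rename(columns={"mean": "proportion_lymphoma"}))
print(summary)
```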
The C3 allele at D4Mit55 led to a nearly three-fold increase in risk that a multiparous female would die of mammary adenocarcinoma, but this effect was not seen in virgin females of the LAG1 or LAG2 studies, for which the two alleles were associated with mammary cancer incidence rates of 6.8% and 6.7%. The basis for the increased risk of mammary adenocarcinoma in multiparous as compared to virgin female mice is not understood, and contrasts with the relation between mammary cancer risk and child-bearing in humans. It seems likely that the locus on chromosome 4 modulates the factors, presumably hormonal, that increase mammary cancer risk in multiparous female mice, although it is noteworthy that mating increases the risk of breast cancer (from 6.7% to 12%) even in mice that inherit the low-risk allele at D4Mit55. The proneoplastic effect of mammary tumor virus transmitted in milk from C3H/HeJ females to their offspring does not influence tumor risk in UM-HET3 mice, in which the C3H/HeJ alleles are contributed by the paternal grandfather.
The B6 allele at D11Mit156 seems to lead to a six-fold increase in the risk of hepatocellular carcinoma in these mice. This tumor is fairly rare as a cause of death, however, with only 34 cases seen among the 886 diagnosable cases, and the association with the chromosome 11 allele did not reach our criterion for experiment-wise significance.
Pulmonary adenocarcinoma was seen at necropsy primarily in those mice that inherited the C allele at D6Mit198. This is a lesion much more common in male than in female UM-HET3 mice, and post hoc analyses showed that this C allele was associated with an 80% increase in the risk of lethal pulmonary tumors in male mice, with an insignificant effect in females. We can postulate that the sex-specific factors that increase risk of lung cancer in these males may themselves be modulated by this chromosome 6 polymorphism. These data also provide a new perspective on a prior study (15) of strain differences in tumor susceptibility in mice given a single oral dose of the potent carcinogen 7,12-dimethylbenz[a]anthracene (DMBA). This earlier work included analyses of induced tumor incidence in BALB/cJ, C3H/HeJ, C57BL/6J, and DBA/2J mice, the grandparental stocks used to produce UM-HET3 mice. Among the four inbred stocks, the highest incidence of pulmonary adenocarcinoma was seen in the BALB/c mice, and it is thus tempting to note the provisional association of the C allele at D6Mit198 with this lesion in the UM-HET3 stock.
The results for hemangiosarcoma and fibrosarcoma are more difficult to explain. There are two potential confounding factors: (a) slides from the LAG2 mice were not evaluated by the same individual responsible for evaluating the LAG1 and BCA cohorts; and (b) the two groups of animals were produced and raised several years apart, and might therefore have been exposed, inadvertently, to different dietary or environmental influences. Comparison of a sample of cases with these lesions from each of the two necropsy series confirmed that they were indeed histologically distinct. Angiosarcoma in humans has been noted to exhibit microscopic heterogeneity; in one series, for example (16), all cases were reported to have focal vasoformative components similar to those in hemangiosarcomas in the LAG2 cohort, but 7% of the cases also contained solid components resembling the fibrosarcoma seen in the LAG1/BCA mice. Although it is possible that sampling at necropsy resulted in systematic differences in lesion presentation between the two series of cases, we do not consider this a likely explanation for the disparity. The second possibility is that the difference in lesion presentation was due to unknown environmental factor(s) that differed between the two groups of mice. We suspect that some unknown variation in environmental conditions between 1995 and 1998, such as an inadvertent change in the composition of the natural-products diet or some alteration in water quality, may have led to the reciprocal increase in vascular neoplasia and reduction in fibrosarcoma risk, but we cannot put this idea to any immediate test. Our genetic results show that an allele linked to D4Mit170 in the BALB/c genome leads to a two-fold increase in the risk of hemangiosarcoma and fibrosarcoma compared to the corresponding B6 allele. The experiment-wise p value < .009 makes it unlikely that this is a chance association. We thus hypothesize that the vascular tumors and sarcomas may, in UM-HET3 mice, share a common underlying basis, modulated by this chromosome 4 locus.
It is noteworthy that a previous study of the LAG1 mice, using a separate computational approach, found evidence for two loci on chromosome 4 with an influence on fibrosarcoma risk (17) . The strongest association was with a locus mapping to the proximal portion of chromosome 4, between 0-20 cM from the centromere. A second, weaker, association was seen for paternal alleles mapping between 50-92 cM. The current study found no evidence for the proximal gene, but did detect a paternal allele linked to marker D4Mit170 at 67 cM. These two results, though consistent for the distal marker, cannot be considered as independent confirmation, because the LAG1 mice evaluated by Xu and colleagues were included in the current, much larger data set. It is of interest that the proximal quantitative trait locus for LAG1 mice did not produce a detectable signal in the current population pooled across all three study groups.
Our earlier papers, in which UM-HET3 mice were used to map polymorphic loci for age-sensitive traits including T-cell subsets (3, 7), life span (8, 9), hormone levels (10), and body weight (12), used traditional quantitative trait locus methods, in which the outcome, such as age at death, was a continuous quantitative trait. In contrast, the current data set was evaluated using an algorithm adapted for the mapping of qualitative traits, in which the phenotype takes on one of a small number of discrete values, in the current instance the presence or absence of a specific lethal lesion.
This necropsy series provides a valuable foundation for future studies of the genetic and molecular factors that influence disease in a genetically heterogeneous mouse population. Each mouse in these three cohorts has been genotyped at more than 150 biallelic markers, and DNA has been archived that would support fine-scale mapping of chromosomal regions of interest in later studies. Tissue blocks and stained sections are available from each of the necropsy cases as well, which could support follow-up studies of questions concerning the histopathology of spontaneous late-life tumors and, potentially, of their molecular constituents as evaluated by tissue microarrays or related immunohistochemical techniques. Each mouse in the series has also been tested for levels of several serum hormones (10, 18) and for age-sensitive T-cell patterns (3, 7). The combination of genetic, physiological, and terminal histopathological resources for these mice provides a unique resource for future studies of tumor biology in aging animals.
|
The highly contagious severe acute respiratory syndrome (SARS) affected individuals in 30 countries in 2002 and 2003 [1] . Its causative agent was identified as a novel SARS-associated coronavirus (SARS-CoV) [1, 2, 3] that was initially classified as part of a separate coronavirus group [4, 5, 6, 7] , but is now described as a betacoronavirus [8] . As with most coronaviruses, SARS-CoV encodes four structural proteins: spike (S), membrane (M), envelope (E) and nucleocapsid (N) [4, 9] . Mature coronavirus particle assembly involves protein-protein and protein-RNA interactions. M, the most abundant structural protein [10] , is thought to play a central role in directing virus assembly and budding via interaction with E, S and N [10, 11, 12, 13, 14, 15, 16, 17, 18] . Translated on free polysomes, N is associated with newly synthesized viral genomic RNA to form helical nucleocapsids [19] . The M membrane glycoprotein is cotranslationally inserted into the endoplasmic reticulum (ER) and transported to Golgi complexes [20, 21] . M interacts with nucleocapsids on the cell membranes of ER or Golgi complexes [22, 23, 24, 25, 26] . In a similar manner, S and E proteins are translated on membrane-bound polysomes, inserted into the ER, and transported to Golgi complexes, where E and M interact and trigger virion budding with enclosed nucleocapsids [14, 19] . S is incorporated into virions via interactions with M. Virions accumulate in large, smooth-walled vesicles that are exocytotically released from cells [4] .
Despite lacking a significant amino acid sequence homology, SARS-CoV M shares structural and functional similarities with other coronavirus M proteins [27] . In addition to having an amino-terminal ectodomain, a triple-membrane spanning domain, and a carboxyl-terminal endodomain [19, 28] , coronavirus M proteins localize exclusively in the ER/Golgi area [29, 30, 31] . However, the M proteins of SARS-CoV, the transmissible gastroenteritis virus, and the feline infectious peritonitis virus are all capable of reaching the plasma membrane [32, 33, 34, 35] .
M plus E [36, 37, 38] or M plus N [16, 39] are minimum requirements for SARS-CoV virus-like particle (VLP) formation, and the combined expression of M, N, and E is necessary for efficient VLP production [40]. SARS-CoV M has been detected in medium when expressed alone [37]. We previously demonstrated that SARS-CoV M is capable of self-association and secretion into medium as membrane-enveloped vesicles with a buoyant density slightly less than that of VLPs formed by M plus N [41]. Since N is undetectable in medium without M coexpression, it appears that SARS-CoV M directs VLP assembly by incorporating N into VLPs. Accordingly, mutations that block SARS-CoV M self-assembly or secretion also block VLP assembly, regardless of their effect (or lack thereof) on M-N interaction.
Our goal in this study was to identify specific SARS-CoV M amino acid residues that are critical for VLP assembly. Site-directed mutagenesis results suggest the involvement of M cytoplasmic tail dileucine residues in the packaging of N into VLPs. We observed that amino acid residues important for M self-assembly or secretion are dispersed along the carboxyl-terminal endodomain and the amino-terminal region, including the transmembrane domains. This finding supports the proposal that multiple SARS-CoV M regions are involved in M self-assembly. Here we report our identification of several amino acid residues that may play a role in SARS-CoV assembly.
SARS-CoV M contains three cysteine residues: C63 and C85 are found in the second and third transmembrane domains, respectively, and C158 is located in the carboxyl-terminal endodomain (Fig. 1). Results from our tests to determine whether cysteine residues play a role in SARS-CoV VLP assembly indicate that a serine substitution at C63 or C85, or a combined C63/85S double mutation, did not significantly affect VLP assembly and release (Fig. 2A, lanes 11, 12 and 14). In contrast, single, double, or triple substitutions of cysteine residues involving C158 markedly affected VLP production, likely as a result of reduced M secretion. Note that secreted M mutants carrying the C158S mutation are deficient in the glycosylated form (Fig. 2A, upper panel, lanes 13, 15 and 16), suggesting an M-C158S maturation defect in the classical secretory pathway [54]. As a control, N expressed by itself was not released into medium (Fig. 2B, lane 8). An HA tag at the M amino-terminus (HA-M) had no major effect on M release and N packaging; in contrast, a FLAG tag at the M carboxyl terminus (M-FLAG) significantly affected N incorporation (Fig. 2B, lane 11). This is consistent with previously reported results [41]. We found that N coexpression slightly increased the release of wt or secretion-competent mutant M, but had little effect on the release of secretion-defective mutants. This observation does not alter our conclusion that N release or VLP production depends on the presence of secretion-competent M proteins. Since M+N VLP assembly is determined by M release capacity, we determined the mutational effects on M release capacity under N coexpression conditions. The results shown in Figure 2A suggest that C158 is necessary for SARS-CoV M self-assembly or release.
To determine whether any of the three cysteine residues lie at the M dimer interface, we treated VLPs with bismaleimidohexane (BMH), a cysteine-specific cross-linking reagent. The HIV-1 precursor Pr55gag, which is capable of self-assembly into VLPs, served as a positive control. As shown in Figure 2C, we noted a band of approximately 110 kDa corresponding to the Pr55gag dimer (lane 4, arrowhead), which is consistent with an earlier report that cysteine residues in the Pr55gag interaction (I) domain are capable of cross-linking via BMH [42]. In contrast, we failed to detect dimeric or multimeric forms of M in repeated independent experiments (Fig. 2D, lanes 5 and 7). Combined, the data suggest that the C63, C85, and C158 residues of M dimers in a VLP context are not close enough to be cross-linked by BMH.
While constructing the C85S mutation, we unintentionally created a triple mutant (C85S/F95L/S110G) that was severely defective in VLP assembly. Results from further analysis indicate that either the F95L or the S110G mutation significantly affected M secretion and VLP production (data not shown). Since the S110 mutation is located in the highly conserved 107-SWWSFNPE-114 motif, it is not surprising that S110G impaired VLP assembly. When analyzing the impact of the F95L mutation, we found that an alanine substitution for F95 significantly impaired VLP assembly, but a tryptophan substitution did not (details given below). This indication of an important VLP assembly role for the conserved aromatic residue at codon 95 served as our motivation to investigate similar roles for the nearby residues W91 and Y94, two conserved aromatic residues that co-reside with F95 in the third transmembrane domain. We also created alanine and leucine substitutions for W57, which is located in the second transmembrane domain (Fig. 1). P58, which is conserved and located next to W57, was changed into alanine because both tryptophan and proline may play a role in protein-protein interaction [43, 44]. Based on one research team's proposal that the dileucine motif is involved in sorting and trafficking [45], we replaced the carboxyl-terminal dileucine motif (L218-L219) with alanines. Based on past results suggesting that the M self-association domain is located largely within the 50 amino-terminal residues [41], we changed an amino-terminal dileucine motif (L15-L16) and the conserved aromatic residue W19 into alanines to determine whether they are involved in VLP assembly.
For an additional control we used the N-linked glycosylation-blocking mutation N4Q, which is known not to exert any major impact on virus assembly or M trafficking [35, 46]. As expected, unglycosylated N4Q was capable of producing VLPs at near-wt levels (Fig. 3A, lanes 9 versus 10). The dileucine mutation 15LL/AA did not significantly affect M secretion or N packaging (Fig. 3A, lane 11). M-15LL/AA was found almost entirely in glycosylated form (Fig. 3A, lanes 3 and 11); a possible explanation is that the 15LL/AA mutation may facilitate the M glycosylation process. The glycosylated form of M was occasionally (and predominantly) detected in medium (Fig. 3B, lane 16); this is insufficient evidence to confirm that N-glycan modification makes a significant contribution to M secretion or VLP assembly, since M-N4Q, devoid of N-glycosylation, is also competent in self-secretion and N packaging.
[Figure 1 legend fragment: Underlined mutations denote that changing the aromatic residue to Ala or Leu markedly affected M secretion, but replacement with another aromatic residue did not. The ability of each construct to release or produce VLPs with coexpressed N is summarized in Table 1. doi:10.1371/journal.pone.0064013.g001]
[Figure 2 legend fragment: Medium pellet samples corresponding to 50% of total and cell lysate samples corresponding to 5% of total were fractionated by 10% SDS-PAGE and electroblotted onto nitrocellulose filters. SARS-CoV M was probed with rabbit antiserum, and N ...]
Substitution mutations at the M carboxyl tail (218LL/AA) resulted in a statistically insignificant decrease in M secretion, with coexpressed N barely detectable in medium (Fig. 3A, lane 13). This suggests that the 218LL/AA mutation may impair M-N association, which would agree with previous reports that the SARS-CoV M carboxyl-terminal region is involved in M-N interaction [16, 47]. Alanine or leucine changes in W19 (Fig. 4A, lane 13). We have tried to find an adequate explanation for this observation. It may be that the released chimeric vesicles contain greater amounts of F95L than of HA-M or M-FLAG due to favorable F95L incorporation. We observed that in the presence of M-FLAG or HA-M, immature unglycosylated forms of W91A, Y94A, and F95L were more abundant in medium than their glycosylated counterparts, while F95A had a greater abundance of the unglycosylated form in medium when coexpressed with HA-M (Fig. 4A, lanes 10-13, lower vs upper arrowheads). This is likely evidence of preferential association between HA-M or M-FLAG and immature unglycosylated mutant forms. Combined, these results suggest that some secretion-defective mutants can be rescued into wt M particles via M-M interaction.
We performed co-immunoprecipitation experiments to further determine whether reduced M secretion was due to a self-association defect. First, we individually coexpressed secretion-defective mutants with their FLAG-tagged counterparts. As shown in Figure 4B, W19A, W91A, Y94A, and F95L were coprecipitated with their FLAG-tagged versions. According to velocity sedimentation analyses of cell lysates containing expressed M proteins, W19A, W91A, Y94A, and F95A/L were capable of multimerizing into high-molecular-weight complexes in a pattern that was difficult to distinguish from that of wt M (data not shown). These results suggest that the secretion-defective mutants are capable of a certain level of self-association despite defective VLP assembly or release.
Results from immunofluorescence studies indicate that wt M and the secretion-competent mutants localize to both perinuclear and plasma membrane areas. Although most of the secretion-defective mutants did not show plasma membrane localization, the secretion-defective W19A was found in both plasma membrane and perinuclear areas, a staining pattern indistinguishable from that of the wt (data not shown). This suggests that the M release defect is not completely attributable to a defect in plasma membrane localization.
We predicted that for mutants with delayed assembly or budding, VLP quantities in medium would increase with incubation time. Our data indicate that most of the secretion-defective mutants were readily detectable in culture supernatant 48 h post-transfection (Fig. 5A, lanes 17-20 and Fig. 5B, lanes 12-14). This finding suggests that reductions in M secretion or VLP production are partly due to delays in assembly or budding. However, after 48 h of incubation, N remained barely detectable in medium, or detectable but not at a level equivalent to that of released M, suggesting a mutant defect in N packaging. This finding also suggests that the M mutations may have affected viral incorporation of N in addition to impairing M secretion.
Next, we investigated whether secretion-defective M mutants can still interact with N, and whether the failure of the secretion-competent 218LL/AA to form VLPs is due to a defect in N association. N, wt M, or mutant M was coexpressed with GST-N (a GST fused to the N amino-terminus) and subjected to GST pull-down assays. Since N contains a dimerization domain, N association with GST-N was used as a control [48, 49]. As expected, N was efficiently pulled down by GST-N. With the exception of C158S, all of the tested secretion-defective M mutants were co-pulled down with GST-N (Fig. 6), suggesting that M mutants are still capable of N association despite being defective in release from cells. Unexpectedly, 218LL/AA and M-FLAG were efficiently pulled down by GST-N (Fig. 6C, lanes 11 and 12). Similar results were obtained from co-immunoprecipitation experiments using an anti-N antibody (Fig. 6D). Levels of N-associated M-C158S, as determined by GST pull-down or co-immunoprecipitation assays, were lower than those of the other mutants used in this study (Fig. 6C, lane 10 and Fig. 6D, lane 8). These data suggest that the C158S mutation significantly affected M-N interaction, and that the 218LL/AA and M-FLAG mutations at the M carboxyl-terminal tail prevented N from viral incorporation during M-directed virion morphogenesis, even though they did not significantly affect M-N interaction.
To determine whether the intracellular association between mutant M and N leads to VLP formation, cells coexpressing N and either wt or mutant M were observed with a transmission electron microscope (TEM). As expected, numerous VLPs were observed localized in the perinuclear areas of cells coexpressing wt M and N (Fig. 7, panels A and B), which is consistent with a previous report [16]. Intracytoplasmic vesicles containing VLPs were also observed (Fig. 7A, arrowheads). Further, we noted VLPs near the nuclei of cells coexpressing N and the secretion-defective W91A (Fig. 7D). VLPs were also detectable in cells coexpressing N and the Y94A, F95L, or P58A M mutants (Figs. 7E-7I). Figure 5 shows culture supernatants collected from cotransfectants 24 or 48 h post-transfection. At 24 h, spherical particles approximately 100 nm in diameter were observed in wt M and N cotransfectant samples (Fig. 7J), which is consistent with previous results. In contrast, VLPs from cells coexpressing N and P58A were undetectable or barely detectable until 48 h post-transfection (Fig. 7K). Also at 24 h post-transfection, medium samples from P58A plus N cotransfectants were almost identical to the mock-transfected sample; that is, we detected some vesicles, but no VLPs (Fig. 7L). VLPs were barely detectable in culture supernatants derived from cells expressing N plus W91A, Y94A, F95L/A, C158S, or 218LL/AA. No VLPs were found in cells coexpressing 218LL/AA plus N, and no VLPs were detected during TEM observations of concentrated gradient fractions of cell lysates containing 218LL/AA and N. In contrast, all other secretion-defective M mutants were still capable of associating with N and subsequently forming some intracellular VLPs, despite being VLP release-defective.
According to our results, the effects of SARS-CoV M mutations on VLP assembly are largely determined by the capability of the M mutants to be released from cells. According to Siu et al [40], M expressed alone is barely secreted, and N coexpression is required for efficient M release from Vero E6 cells. This discrepancy may be due to the different expression systems used in the two studies: 293T cells may release and/or express SARS-CoV membrane proteins more efficiently than Vero E6 cells. This may partly explain why 293T cells produce VLP-associated spike (S) proteins at levels 28-fold higher than Vero E6 cells, as reported by Siu et al [40]. Nevertheless, the capability of SARS-CoV M to self-assemble and release, or to form VLPs with coexpressed N, represents a convenient strategy for determining M-M and M-N interaction domains.
M-C158S is defective in both secretion and N association, suggesting that the C158 residue is important for SARS-CoV assembly. Although we did not detect any intermolecular disulfide linkages, it remains possible that intramolecular disulfide bonds form during M self-assembly or secretion, which would partly explain why the C158S mutation exerts a significant effect on M release. Evidence showing that secretion-defective SARS-CoV M mutants are capable of self-association, or of association with wt M, supports the proposal that multiple M regions are involved in self-association [52].
The inability of the 218LL/AA or M-FLAG mutant to package N suggests that the M carboxyl-terminal tail domain is responsible for M-N interaction. At least three research teams have suggested that the coronavirus M carboxyl tail region is important for N association [26, 47, 50]. However, our GST pull-down and co-immunoprecipitation results suggest that alanine substitutions for the highly conserved dileucine motif 218-LL-219 failed to significantly impact SARS-CoV M-N interaction; this finding agrees with data from yeast two-hybrid and surface plasmon resonance (SPR) assays [47]. Nal et al [54] have proposed that SARS-CoV M recycles to Golgi complexes via endocytosis once it reaches the plasma membrane. Accordingly, substitution mutations at 218-LL-219, or a FLAG tag at the carboxyl terminus, may block M sorting or trafficking to the Golgi area, resulting in defective VLP formation. This scenario may partly explain why M-FLAG and 218LL/AA were capable of N association following the disruption of cellular compartments.
The M carboxyl-terminal tail is also important for MHV assembly: the deletion of a single amino acid residue from the M carboxyl terminus acts as a significant barrier to VLP assembly [51, 52]. MHV VLP formation is dependent on coexpression with the E protein [10, 52]. Accordingly, impaired MHV VLP assembly due to an M carboxyl-terminal mutation is largely the result of a defect in M-E interaction [52], whereas SARS-CoV M carboxyl-terminal mutations such as 218LL/AA and M-FLAG do not affect M self-assembly and release. In a previous study we observed that SARS-CoV E is also secretable into culture medium (unpublished results); this is in agreement with a report that levels of SARS-CoV VLPs formed by M plus N noticeably increased following E coexpression [40]. However, E coexpression did not significantly enhance the VLP yields of SARS-CoV M mutants (data not shown), suggesting that E is incapable of compensating for M mutants in terms of directing VLP assembly.
With the exception of C158, all of the amino acid residues we identified as important for SARS-CoV M self-assembly were either proline or aromatic (tryptophan, phenylalanine, or tyrosine). Aromatic residues have been shown to mediate the self-assembly of different soluble proteins via π-π interactions between polar aromatic rings [55, 56, 57]. Aromatic side chains have been proposed to favor intra- and inter-peptide electrostatic interactions that contribute to protein secondary structure and stable protein-protein interaction [58]. One research team has demonstrated that an aromatic-X-X-aromatic motif located in the transmembrane (TM) domain of EpsM (a cholera toxin secretion protein) is essential for stabilizing TM dimerization [59]. It is likely that the SARS-CoV M aromatic-X-X-aromatic motif (91-WXXY-94), which resides in the predicted third TM domain, serves a similar function in stabilizing M dimerization. In addition, a more recent study suggests that coronavirus M is capable of adopting two conformations associated with membrane curvature regulation [53]. Accordingly, the replacement of conserved aromatic residues with alanine or leucine may disrupt M conversion from one form to the other, resulting in a membrane-bending defect. This scenario may partly account for decreased mutant VLP yields.
It remains to be determined whether the other M aromatic residues are important for SARS-CoV assembly. Our preliminary study found that an alanine substitution at the highly conserved W54 exerted no detectable effect on SARS-CoV VLP assembly, suggesting that some conserved aromatic residues are not involved in that process. The replacement of SARS-CoV M codons W19, W57, W91, Y94 or F95 with other aromatic residues did not exert detrimental effects on VLP assembly. A future task is to determine if the same is true for other M aromatic residues.
Codon-optimized SARS-CoV M and N expression vectors were provided by G. J. Nabel [16]. A pair of upstream and downstream primers was used to amplify M-coding fragments via PCR-based overlap extension mutagenesis [60], with the SARS-CoV M expression vector serving as a template: the 5'-GTCTGAGCAGTACTCGTTGCTG-3' forward primer (referred to as the N primer) and the 5'-GGAAAGGACAGTGGGAGTGGCAC-3' reverse primer. Oligonucleotide primers containing the substitution mutation codons are available on request. The purified PCR product was digested with BamHI and EcoRV and ligated into the SARS-CoV M expression vector. The GST-N and HIV-1 Gag expression vectors have been described elsewhere [61].
Cell Culture and Transfection. 293T and HeLa cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal calf serum (GIBCO). Confluent cells were trypsinized and split 1:10 onto 10 cm dishes 24 h prior to transfection. For each construct, cells were transfected with 10 μg of plasmid DNA using the calcium phosphate precipitation method; 50 μM chloroquine was added to enhance transfection efficiency. Unless otherwise indicated, 5 μg of each plasmid was used for co-transfection.
At 24-48 h post-transfection, supernatant from transfected cells was collected, filtered, and centrifuged through 2 ml of 20% sucrose in TSE (10 mM Tris-HCl [pH 7.5], 100 mM NaCl, 1 mM EDTA plus 0.1 mM phenylmethylsulfonyl fluoride [PMSF]) at 4°C for 40 min at 274,000 × g. Pellets were suspended in IPB (20 mM Tris-HCl [pH 7.5], 150 mM NaCl, 1 mM EDTA, 0.1% SDS, 0.5% sodium deoxycholate, 1% Triton X-100, 0.02% sodium azide) plus 0.1 mM PMSF. Cells were rinsed with ice-cold phosphate-buffered saline (PBS), collected in IPB plus 0.1 mM PMSF, and microcentrifuged at 4°C for 15 min at 13,700 × g to remove unbroken cells and debris. Supernatant and cell samples were mixed with equal volumes of 2X sample buffer (12.5 mM Tris-HCl [pH 6.8], 2% SDS, 20% glycerol, 0.25% bromophenol blue) and 5% β-mercaptoethanol and boiled for 5 min or (for the M-containing samples) incubated at 45°C for 10 min. Samples were resolved by electrophoresis on SDS-polyacrylamide gels and electroblotted onto nitrocellulose membranes. Membrane-bound M and M-FLAG proteins were immunodetected using a SARS-CoV M rabbit antiserum (Rockland). For SARS-CoV N detection, a mouse monoclonal antibody [62] was used at a dilution of 1:5,000. The secondary antibody was a sheep anti-mouse or donkey anti-rabbit horseradish peroxidase (HRP)-conjugated antibody (Invitrogen), both at 1:5,000 dilutions.
Confluent HeLa cells were split 1:80 onto coverslips 24 h before transfection. At 24 h post-transfection, cells were washed with PBS, fixed at 4°C for 20 min with 3.7% formaldehyde, and permeabilized at room temperature for 10 min in PBS plus 0.1% Triton X-100. Samples were incubated with a rabbit anti-SARS-CoV M or a mouse anti-N monoclonal antibody at a dilution of 1:1,000 for 1 h, and then with a goat anti-rabbit rhodamine-conjugated antibody or a rabbit anti-mouse fluorescein isothiocyanate-conjugated antibody (Cappel, ICN Pharmaceuticals, Aurora, OH) at a 1:100 dilution for 30 min. Following each incubation, samples were subjected to three washes (5 to 10 min each) with DMEM/calf serum. After a final DMEM/calf serum wash, the coverslips were washed three times with PBS and mounted in 50% glycerol in PBS for viewing. Images were analyzed and photographs taken using an inverted Zeiss Axiovert 200M laser microscope.
Velocity Sedimentation Analysis of Cytoplasmic M Proteins. Cells were rinsed twice with PBS, pelleted, and resuspended in 1 ml TEN buffer (10 mM Tris-HCl [pH 7.4], 1 mM EDTA, 100 mM NaCl) containing Complete protease inhibitor cocktail, followed by homogenization using a sonicator. The cell lysates were then centrifuged at 3,000 rpm for 20 min at 4°C. Five hundred microliters of the postnuclear supernatant was mixed with an equal amount of TEN buffer and applied to the top of a pre-made 25-45% discontinuous sucrose gradient, prepared in TEN buffer from 1 ml each of 25%, 35%, and 45% sucrose. The gradient was centrifuged at 130,000 × g for 1 h at 4°C. Five 0.8-ml fractions were collected from the top of the centrifuge tube. The proteins in aliquots of each fraction were precipitated with 10% TCA and subjected to western blot analysis as described in the membrane flotation assay.
293T cells transfected with the FLAG-tagged M expression vector were collected in lysis buffer (50 mM Tris-HCl [pH 7.4], 150 mM NaCl, 1 mM EDTA, 1% Triton X-100) containing Complete protease inhibitor cocktail (Roche) and microcentrifuged at 4°C for 15 min at 13,700 × g (14,000 rpm) to remove unbroken cells and debris. Aliquots of post-nuclear supernatant (PNS) were mixed with equal amounts of 2X sample buffer and held for western blot analysis. Lysis buffer was added to the remaining PNS samples to final volumes of 500 μl, and each sample was mixed with 20 μl of anti-FLAG affinity gel (Sigma). All reactions took place at 4°C overnight on a rocking mixer. Resin- or bead-bound immunoprecipitate complexes were pelleted, washed three times with lysis buffer and twice with PBS, eluted with 1X sample buffer, and subjected to SDS-10% PAGE as described above.
The cross-linking reagent bismaleimidohexane (BMH; Pierce) was prepared in dimethyl sulfoxide (DMSO) as a 20 mM solution. Virus-like particles were prepared in PBS and aliquoted in 20-μl fractions that were mock-treated with 1 μl DMSO or treated with 1 μl of 20 mM BMH in DMSO. Reaction mixtures were vortexed gently and incubated for 1 h at room temperature. Samples were mixed with equal volumes of 2X sample buffer and 5% β-mercaptoethanol and incubated at 45°C for 10 min prior to electrophoresis.
Cells were harvested 24 h post-transfection, fixed in 0.1 M cacodylate buffer containing 2.5% glutaraldehyde, post-fixed with 1% osmium tetroxide, dehydrated in ethanol, and embedded in Spurr resin. Thin sections were cut with an ultramicrotome and stained with 5% uranyl acetate and 0.4% lead citrate. Concentrated viral samples were placed onto carbon-coated, UV-treated 200-mesh copper grids for 2 min. Sample-containing grids were rinsed for 15 s in water, dried with filter paper, and stained for 1 min in filtered 1.3% uranyl acetate. Excess staining solution was removed by applying filter paper to the edge of each grid. Grids were allowed to dry before viewing with a JEOL JEM-2000 EXII transmission electron microscope. Images were collected at 20,000× and 60,000× magnification.
|
Abstract: In critically ill patients with coronavirus disease 2019, there has been considerable debate about when to intubate patients with acute respiratory failure. Early expert recommendations supported early intubation. However, as we have learned more about this disease, the risks versus benefits of early intubation are less clear. We report our findings from an observational study that aimed to compare outcomes of critically ill patients with coronavirus disease 2019 who were intubated early versus later in the disease course. Early need for intubation was defined as intubation either at admission or within 2 days of having a documented FiO2 greater than or equal to 0.5. In the final sample of 111 patients, 76 (68%) required early intubation. The mean age among those who received early intubation was significantly higher (69.79 ± 12.15 vs 65.03 ± 8.37 years; p = 0.038). The patients who required early intubation also had significantly higher Sequential Organ Failure Assessment scores at admission (6.51 vs 3.48; p ≤ 0.0001). Outcomes were similar in both groups. In conclusion, we suggest that the timing of intubation has no significant impact on clinical outcomes among patients with coronavirus disease 2019 pneumonia.
To the Editor: Patients with coronavirus disease 2019 (COVID-19) vary from being asymptomatic to experiencing life-threatening critical illness. The basic pathophysiology of severe viral pneumonia is hypoxic respiratory failure secondary to severe acute respiratory distress syndrome (ARDS) (1). Most patients admitted to the ICU end up requiring mechanical ventilation (2, 3). In patients with ARDS, delays in intubation have been associated with higher mortality (4, 5). However, in patients with COVID-19, there has been considerable debate about how to optimize management of acute respiratory failure/ARDS. Uniquely in COVID-19, many patients present with significant hypoxia but few of the other signs typically seen in patients with respiratory failure. Although these patients may seem quite stable at first, respiratory decompensation occurs frequently; when it does, the trajectory is typically quite rapid, making the process of intubation a perilous affair. In response, several experts from China, Europe, and the United States supported a strategy of intubating patients early, under the premise that early intubation allowed for more controlled circumstances and would provide superior lung protection compared with spontaneous breathing (6, 7). These recommendations led to a rapid rise in ventilator utilization, at one point threatening to overwhelm available mechanical ventilator resources worldwide (8). However, recognition that intubation and mechanical ventilation have inherent risks (ventilator-associated pneumonia, airway injury, ventilator-associated lung injury, and hemodynamic disturbances caused by positive pressure ventilation) has led to a state of relative equipoise about whether the early intubation strategy is indeed better for patients. This has led some experts to recommend abandoning the early intubation strategy (7). Despite these recommendations, the optimal threshold for intubating patients with COVID-19 pneumonia remains unclear.
We performed a retrospective analysis at our institution to evaluate the association between timing of intubation and outcomes among critically ill patients with COVID-19. A total of 128 ICU patients who tested positive for COVID-19 via polymerase chain reaction were evaluated in our hospital from March 15 to May 30, 2020. Eight patients were excluded because they were transferred to an outside hospital or were still admitted at the time of data analysis. Patients who did not require intubation were also excluded. In the final sample of 111 patients, 76 (68%) were intubated early in the disease course. Early need for intubation was defined as intubation either at admission or less than 2 days after the onset of increased oxygen requirements, that is, after the patient first required more than 50% FiO2 (more than 10 L/min by nasal cannula or nonrebreather mask, high-flow nasal cannula [HFNC; a device capable of delivering up to 100% heated and humidified oxygen at a flow rate of 30-60 L/min], or noninvasive positive pressure ventilation). All intubations 2 or more days after the onset of increased oxygen requirements were considered late intubations. This study was approved by the institutional review board at Albert Einstein Medical Center, Philadelphia (IRB-2020-458).
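As a concrete illustration of this definition, the sketch below classifies patients as early or late from admission, first high-FiO2, and intubation timestamps. The data frame and column names are hypothetical; this is not the study's actual dataset or analysis code.

```python
# Hedged sketch of the early-vs-late classification rule: "early" =
# intubated at admission, or fewer than 2 days after the first documented
# FiO2 >= 0.5; all column names and dates below are made up.
import pandas as pd

records = pd.DataFrame({
    "patient_id":       [1, 2, 3],
    "admit_date":       pd.to_datetime(["2020-03-20", "2020-03-22", "2020-04-01"]),
    "first_fio2_ge_50": pd.to_datetime(["2020-03-20", "2020-03-23", "2020-04-02"]),
    "intubation_date":  pd.to_datetime(["2020-03-20", "2020-03-28", "2020-04-03"]),
})

days_to_intubation = (records["intubation_date"]
                      - records["first_fio2_ge_50"]).dt.days
intubated_on_admission = records["intubation_date"] == records["admit_date"]

records["group"] = "late"
records.loc[intubated_on_admission | (days_to_intubation < 2), "group"] = "early"
print(records[["patient_id", "group"]])  # patients 1 and 3 early, 2 late
```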
There were no fixed protocols or predefined criteria for intubation. The decision to intubate was left to the discretion of the attending intensivist, who responded in accordance with each patient's individual needs and clinical status. Most clinicians practiced an early intubation strategy at the beginning of the pandemic. However, as more data emerged and recommendations changed (7), clinicians became more comfortable monitoring patients on noninvasive modes of oxygenation (such as HFNC or noninvasive positive pressure ventilation). The clinician's judgment to initiate mechanical ventilation was influenced by a multitude of factors, including oxygen saturation, respiratory rate, work of breathing, mental status, and hemodynamics.
The mean age (±SD) of the sample population was 68 ± 11.28 years. Forty-six percent of patients were female, and 65% were African American. Common chronic comorbidities included hypertension (87%), diabetes mellitus (57%), and chronic obstructive pulmonary disease (14%). Demographic and clinical characteristics are provided in Table 1. The mean age among those who required early intubation was significantly higher than among those who underwent late intubation (69.79 ± 12.15 vs 65.03 ± 8.37 yr; p = 0.038). There were no other significant demographic differences between the two groups. Inflammatory markers such as serum ferritin, D-dimer, C-reactive protein, procalcitonin, and lactate dehydrogenase were similar between the two groups. There was no difference in the PaO2:FiO2 ratio between groups at the time of intubation. There were no significant differences between groups in COVID-19-specific treatments such as hydroxychloroquine, steroids, tocilizumab, or remdesivir. The rate of early intubation remained high throughout the study period, with a slow decline at the end; an opposite trend was observed in the number of late intubations (Fig. 1). This was in accordance with the change in intubation practices described above.
Prior studies have shown that age is an important predictor of mortality (5, 6). Under these circumstances, it would seem plausible that mortality might be higher in those who were intubated early. Although there was a trend toward increased mortality (71% vs 57%; p = 0.194) and more days on a ventilator (10.41 ± 7.53 vs 8.00 ± 7.82 days; p = 0.125) among patients who required early intubation compared with those who underwent late intubation, these differences were not statistically significant.
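The mortality comparison can be checked with a standard 2 × 2 test, as in the sketch below. The death counts are back-calculated from the reported percentages and group sizes (76 early, 35 late), so treat them as approximate rather than as the study's raw data.

```python
# Approximate re-check of the reported mortality comparison (71% vs 57%).
from scipy.stats import chi2_contingency, fisher_exact

deaths_early, n_early = 54, 76   # 54/76 ~ 71%
deaths_late,  n_late  = 20, 35   # 20/35 ~ 57%

table = [[deaths_early, n_early - deaths_early],
         [deaths_late,  n_late  - deaths_late]]

chi2, p_chi2, _, _ = chi2_contingency(table)   # Yates-corrected by default
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi2:.3f}; Fisher exact p = {p_fisher:.3f}")
```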
The trajectory of decline in clinical status was rather rapid among patients requiring early intubation. This was evident from the fact that the median number of days to intubation was significantly lower in patients who were intubated early compared with those intubated late (1 vs 4.5 d; p < 0.001). The time that patients spent on more than 50% FiO2 (> 10 L/min nasal cannula, nonrebreather mask, HFNC, or noninvasive positive pressure ventilation) was also significantly lower among patients requiring early intubation (0.38 d vs 3 d; p < 0.001).
With respect to respiratory mechanics, mean compliance was significantly higher in patients who required early intubation (Fig. 2). Plateau pressure within the first 24 hours was also significantly lower among patients undergoing early intubation (25 vs 27 cm H2O; p = 0.024) (Fig. 2). The respiratory compliance noted in both groups was comparable with that reported in earlier cohorts (2, 9, 10). This variability in respiratory mechanics suggests significant heterogeneity in the disease process, as described earlier by Gattinoni et al (11), although the lung compliance in their group was much higher (9). They categorized patients with COVID-19 pneumonia into two phenotypes: phenotype L, characterized by low elastance (high compliance), and phenotype H, characterized by high elastance (low compliance). Although both phenotypes have been observed, the clinical significance of having one phenotype versus the other is not well studied. Both groups in our cohort had similar PaO2:FiO2 ratios, suggesting that the extent of lung injury was similar between groups even though lung compliance and plateau pressures differed.
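For orientation, static respiratory-system compliance is conventionally computed as tidal volume divided by the difference between plateau pressure and PEEP. The sketch below shows the arithmetic with illustrative numbers that are not drawn from this cohort.

```python
# Worked example of the static compliance formula; values are illustrative.
def static_compliance(tidal_volume_ml: float, plateau_cm_h2o: float,
                      peep_cm_h2o: float) -> float:
    """Static respiratory-system compliance in mL/cm H2O."""
    return tidal_volume_ml / (plateau_cm_h2o - peep_cm_h2o)

# e.g., 420 mL tidal volume, plateau 25 cm H2O, PEEP 10 cm H2O -> 28 mL/cm H2O
print(f"{static_compliance(420, 25, 10):.1f} mL/cm H2O")
```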
In this case, the absence of an outcome difference between groups at worst supports the view that delaying intubation was not harmful to patients when compared with early intubation. A higher Sequential Organ Failure Assessment (SOFA) score has been associated with higher mortality among patients admitted to the ICU (10). In our study, the patients who required early intubation had significantly higher SOFA scores on hospital admission and on ICU admission than the late intubation group (6.51 vs 3.48; p ≤ 0.0001 and 8.15 vs 6.29; p = 0.005, respectively). However, no difference in mortality was noted between the two groups.
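For readers unfamiliar with the score, SOFA grades six organ systems (respiration, coagulation, liver, cardiovascular, neurologic, renal) from 0 to 4 each and sums them. The sketch below scores three of the six systems using the standard published cutoffs; it is a simplified illustration, not the scoring code used in this study.

```python
# Simplified, partial SOFA-style scoring (3 of 6 organ systems) for
# illustration only; cutoffs follow the standard published score.
def sofa_respiration(pao2_fio2: float, ventilated: bool) -> int:
    if pao2_fio2 < 100 and ventilated:
        return 4
    if pao2_fio2 < 200 and ventilated:
        return 3
    if pao2_fio2 < 300:
        return 2
    if pao2_fio2 < 400:
        return 1
    return 0

def sofa_coagulation(platelets_k_per_ul: float) -> int:
    for cutoff, score in ((20, 4), (50, 3), (100, 2), (150, 1)):
        if platelets_k_per_ul < cutoff:
            return score
    return 0

def sofa_renal(creatinine_mg_dl: float) -> int:
    for cutoff, score in ((5.0, 4), (3.5, 3), (2.0, 2), (1.2, 1)):
        if creatinine_mg_dl >= cutoff:
            return score
    return 0

# Example: PaO2/FiO2 180 on a ventilator, platelets 90 K/uL,
# creatinine 1.5 mg/dL -> partial score 3 + 2 + 1 = 6
print(sofa_respiration(180, True) + sofa_coagulation(90) + sofa_renal(1.5))
```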
We acknowledge that there are several limitations to our study. First, this study is a single-center observational study with a relatively small cohort of patients. The decision to intubate was left to the discretion of the ICU team, and we did not have a predefined criterion for intubation specifically for patients with COVID-19 pneumonia. Nonetheless, the analysis provides an early, pragmatic evaluation of outcomes associated with COVID-19-associated respiratory failure and intubation timing. The factors highlighted in our study will provide some guidance for future prospective study planning.
In conclusion, we suggest that the timing of intubation does not seem to be significantly associated with poor clinical outcomes in critically ill patients with COVID-19. The timing of intubation seems to be driven mainly by disease severity and rate of progression. Hence, a trial of noninvasive oxygenation strategies in an attempt to avoid intubation may not be harmful (Table 2). However, larger prospective studies are needed to fully elucidate these effects. The study was approved by the Institutional Review Board at Albert Einstein Medical Center, Philadelphia.
response in premature infants with mild-to-moderate respiratory distress. In this regard, we agree with the comment by Madney et al (1) that the experimental period of 3 hours may be considered too short to investigate the full treatment response to a single dose and the possible need for repeat dosing. Indeed, after achieving our objective of identifying an appropriate dose of surfactant nebulization for the treatment of respiratory distress in our preclinical setting and demonstrating the safety of the technique using an e-Flow nebulizer (2), we undertook a long-term study to evaluate the efficacy of this technique over the critical period of 72 hours after surfactant administration (7). Our results confirm that nebulization of 400 mg/kg of poractant alfa is effective in our animal model, given the 50% lower risk of respiratory failure (requiring intubation and mechanical ventilation) in the first 72 hours after surfactant administration compared with nCPAP alone. Unfortunately, we did not test the repeat-dosing option in this study setting.
Finally, the authors raise an important question regarding the translation of the results of our preclinical study to clinical practice. Animal models of RDS have been used extensively for many years to better understand the pathophysiology of neonatal RDS and to verify novel treatment approaches, and, we would argue, they are still very much needed today. Nonetheless, while animal models provide a valuable bridge between laboratory research and the clinic, we agree, of course, that translating our results (and indeed the results of any preclinical experimental model) to human infants requires caution. Of note, a clinical trial investigating the safety, tolerability, and efficacy of nebulized poractant alfa has recently closed (European Union Drug Regulating Authorities Clinical Trials Database number: 23016-004547-36). More details on the outcomes of this clinical trial, and perhaps other future studies in the field, will be needed to understand the risk-benefit profile of this therapeutic option and, in turn, the potential role of surfactant nebulization in premature infants with RDS.
Drs. Rey-Santano's, Mielgo's, and Gomez-Solaetxe's institution received funding from Chiesi Farmaceutici and the Carlos III Health Institute (PI14/00024), and they disclosed off-label product use of a vibrating-membrane nebulizer (eFlow-Neos). Dr. Salomone disclosed off-label product use of aerosolized poractant alfa. Drs. Salomone and Bianco received funding from Chiesi Farmaceutici.

The outbreak of infections by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was officially declared a Public Health Emergency of International Concern by the World Health Organization (WHO) on January 30, 2020, after the initial cases in China continued to rise and new cases started to be reported from several other countries in Asia and Europe. On March 12, 2020, with over 20,000 cases and almost 1,000 deaths in the European region, the WHO declared the outbreak a pandemic (1). To this day, the number of SARS-CoV-2 infections (coronavirus disease 2019 [COVID-19]) continues to rise worldwide, bringing along an alarming number of deaths.
The Global Preparedness Monitoring Board of the WHO, in its 2019 report "A World at Risk," stated that although progress had been made, worldwide efforts to face a health emergency remained "grossly insufficient" (2). Although ICUs have had to prepare for pandemic situations in the past and guidance has been provided by professional societies (3), the current unprecedented situation continues to overwhelm healthcare systems and economies around the world. Governments are facing socioeconomic, logistic, and organizational challenges that may change the way our societies and healthcare systems function forever.
The availability of ICU beds varies greatly among countries (4) and depends on several factors, such as the number of beds per 100,000 inhabitants, socioeconomic status, the prevalence of chronic illnesses, the overall health status of the population, and management choices (i.e., different admission and discharge criteria).
During the current pandemic, special attention has been paid to ICU bed and ventilator availability. Many hospitals in Italy, Spain, the United Kingdom, and the United States have been forced to expand their ICUs outside of their regular spaces, using nonconventional locations such as operating theaters, wards, or postoperative care units as ICUs. These ad hoc spaces commonly lack the complex architecture and resources of a conventional ICU, making logistics challenging.
Although healthcare systems around the world have been able to expand their capacity in terms of ICU beds, ventilator availability has been a major problem. With a saturated global market that cannot meet the demand for these high-tech devices, physicians and respiratory therapists have turned to alternative management strategies, including careful selection of the patients who will benefit most from invasive ventilatory support, extended use of noninvasive support, off-label use of noninvasive ventilation devices or anesthesia machines for invasive mechanical ventilation, and even ventilator splitting (i.e., using one ventilator to support two or more patients) (5). Likewise, governments, academic institutions, private companies, and individuals have made an enormous effort to increase the supply of ventilators, including, in some cases, homemade devices.
With increased ICU bed capacity and ventilator availability, the next challenge arises: critically ill adults with COVID-19 are highly complex patients who require specialized ICU management, including nursing, respiratory support, and supportive care. This complexity, together with the elevated number of patients requiring intensive care and the high number of healthcare workers affected by COVID-19, has created severe staffing problems in institutions worldwide.
Although Goran Haglund established the first-ever PICU in Gothenburg, Sweden, in 1955, Pediatric Critical Care Medicine is a relatively young subspecialty (e.g., the pediatric section of the Society of Critical Care Medicine was created in 1981) that has rapidly evolved into a highly complex field (6). Pediatric intensivists are highly skilled, highly specialized physicians who treat, on a day-to-day basis, severely ill children with life-threatening conditions such as congenital heart disease, trauma, and infectious diseases. PICUs are high-acuity units where children across a wide range of ages receive state-of-the-art care around the clock, including invasive mechanical ventilation, extracorporeal life support, and continuous renal replacement therapies.
COVID-19 seems to somewhat spare children: those who show symptoms rarely progress to needing PICU admission (7). This situation leaves pediatric critical care teams relatively unexposed to the infection and with a maintained or decreased workload.
In an unprecedented situation for ICUs around the world, with healthcare systems suffering severe shortages of equipment and staff, pediatric critical care physicians can be of great value in providing temporary support to adult ICUs (8). With advanced knowledge of physiology and, specifically, respiratory support, pediatric intensivists can be integrated into ICU teams and, under the constant supervision of adult ICU consultants, can exceptionally perform fellow-level tasks that may help alleviate the burden these teams are suffering.
Pediatric teams have been providing resources and logistic support to adult ICUs in our regions for the last month and a half. This process has been driven by institution-wide protocols and well-meaning improvisation with very little specific guidance, leading to significant heterogeneity in practice. Questions remain as to whether admitting adult patients to PICUs or deploying pediatric intensivists to adult ICUs should be the preferred model. We encourage professional societies from the critical care field, both adult and pediatric, to develop and distribute consensus statements that, at a national level, may provide help and support on how to integrate mixed teams in which pediatric intensivists have clearly defined roles and responsibilities.

The authors (2) highlight an important alternative role for pediatric intensivists outside the PICU in supporting adult ICUs in the fight against the COVID-19 pandemic. Pediatric intensivists are comprehensively trained in principles of critical care (e.g., respiratory physiology and mechanical ventilation), which can be easily transposed to adult patients, making them qualified to oversee care in an adult ICU as described (2). Our recent perspective (1) should provide pediatric providers with clinical guidance important in caring for adult patients with COVID-19 and highlights common adult situations rarely encountered in pediatrics. This guidance can be applied in an adult or pediatric hospital. A number of the issues raised by Christian and Kissoon (3) can be overcome if pediatric intensivists oversee the care of adults with COVID-19 in a primary adult setting.
This appears to be the strategy used in Spain and Italy (2). An interesting alternative recently reported (4) is the care of adults with COVID-19 within a PICU located in a primarily adult hospital. In these situations, the hospitals had in place the supplies and systems needed to care for adults. Likewise, there are academic pediatric hospitals connected by halls or bridges to adult centers, readily permitting the use of adult consultants and equipment/supplies and overcoming many of the challenges pointed out by Christian and Kissoon (3). We agree with these authors that these approaches are preferable to bringing adults into a PICU where the care of adults is uncommon.
In a COVID-19 surge, one must consider whether the scarcity lies in trained personnel, in appropriately equipped critical care settings, or in both. Admitting adults to a PICU in a children's hospital is sensible when ICU-equipped spaces with adequate monitoring, medical gases, and vacuum are scarce. This avoids creating ad hoc ICUs in schools or stadiums, which have been proposed for surge capacity but have clear limitations. Adults brought to a pediatric setting may benefit from services uncommon in adult hospitals, such as pet, art, music, and "child life" therapies, and from rooms designed to permit a family member to remain during the hospitalization. Thousands of adults have died in heartbreaking isolation from their loved ones without any form of solace in their final days, a situation rarely permitted in pediatric hospitals.
A "one size fits all approach" is unlikely to be universally effective or feasible during this pandemic. However, the pandemic does provide an opportunity to consider related nonpandemic patient care issues such as where and how to care for adults with congenital heart disease, cystic fibrosis, sickle cell, or muscular dystrophies where pediatric providers/hospitals may have greater expertise. We appreciate the innovative approaches and dedication exhibited by our colleagues in Spain and Italy as they bravely confront this pandemic and prove that pediatric intensivists can save lives regardless of the age.
To the Editor: We read with great interest the article by Carcillo et al (1) published in a recent issue of Pediatric Critical Care Medicine. Studies of biomarkers in adults with sepsis and acute respiratory distress syndrome (ARDS) have shown that identification of specific subphenotypes could lead to better identification of patients who may be more responsive to interventions. In ARDS, a combination of biomarkers and clinical data has improved the understanding of patient profiles and may influence entry criteria of clinical trials. Recent trials support that the presence of ARDS subphenotypes may demand distinct treatment approaches regarding, for example, fluid management or other specific therapies (2). In the Statins for Acutely Injured Lungs from Sepsis (SAILS) study (3), two subphenotypes (hyperinflammatory and hypoinflammatory) were tested for a differential treatment response to statins. Although statin use did not show a treatment effect, patients with the hyperinflammatory subphenotype had higher mortality.
We read with interest the comment by Piccolo and Bassi, in which they propose to use the term "chilblain-like lesions" instead of "acro-ischemic lesions" to denominate the acral lesions of COVID-19 patients (1). At the time of analyzing our study, only two papers reporting "acro-ischemia" in COVID-19 patients had been published, and thus we adopted the same name. In the following days, numerous articles were published reporting these acral lesions under different terms,
including chilblain-like lesions, chilblains, pseudo-chilblain, erythema pernio-like, perniosis-like, vascular skin symptoms, vascular acrosyndromes, COVID-19-induced chilblains, and chilblains of lockdown, among others.
Dermatology has traditionally been a morphologic and descriptive specialty, and we still use a plethora of ancient names based on the morphology of skin lesions. We are prone to creating new terms by adding the prefix pseudo- or the suffix -like to the names of original entities, usually because of clinical or histological resemblance to those entities or an incomplete understanding of their pathophysiology (2). It is true that many of the reported cases are morphologically similar to classical chilblains or pernio. However, several articles from different countries have reported acral lesions with little or no resemblance to chilblains, also affecting areas other than the fingers: yellowish-erythematous plaques on the heels, targetoid pink plaques on the dorsum of the feet, hands, or elbows, swollen and violaceous toes, or acral non-necrotic purpura. In our study, we also found a pattern of coalescing macules and vesicles, some with a targetoid appearance, that does not fit the classical chilblain description (3).
As Piccolo et al. stated, the etymology of the word "chilblains" includes chill- (cold).
However, most of the cases reported in COVID-19 times have not been related to cold exposure. Given the suggested alterations in coagulation, endothelial dysfunction, and thrombotic response associated with COVID-19 (4), it is not unreasonable to think that similar stimuli (of different intensity) may play a role in these acral lesions in both asymptomatic and hospitalized patients. In fact, a recent French study found "vascular microthrombi" in two biopsies from non-hospitalized patients with chilblain-like lesions (5).
We believe that acral skin lesions in COVID-19 patients are a continuum ranging from subtle erythematous macules and chilblain-like lesions to gangrene or digital ischemia. It is possible that multiple etiological factors are involved in the development of COVID-19 acral and non-acral skin lesions, including both coagulation disorders and immune mechanisms.
We agree that an international consensus on terminology should be established to group these skin manifestations, at least until their exact pathogenesis is elucidated. "Chilblain-like lesions" is currently the most widely used term. As a morphological term, it is preferable to the etiological term "acro-ischemic lesions." However, it is not a perfect term, as it covers the majority of these skin manifestations but not all of them.
Viruses are obligate intracellular parasites that rely on cellular functions for each stage of their life cycle [1,2]. To successfully enter a cell, enveloped viruses bind to specific surface receptors through their transmembrane glycoproteins and subsequently activate intracellular signal transduction to initiate entry; non-enveloped viruses bind through the capsid surface or projections from the capsid [3]. Viral penetration into the host cell is followed by genome uncoating, genome expression and replication, assembly of new virions, and their egress [4]. To maintain homeostasis, membrane-bound organelles serve as scaffolds that compartmentalize cellular trafficking and secretory signaling. Upon viral infection, the membranes of intracellular organelles are remodeled and utilized by viruses as platforms to coordinate the accumulation of the viral and cellular components required for efficient replication [4,5].
In addition to the rearrangement of intracellular organelles, massive viral infection also leads to the accumulation of damaged organelles, misfolded proteins, and other macromolecules. Autophagy is a conserved, multistep catabolic process that non-selectively or selectively delivers cytoplasmic components, including large proteins and damaged organelles, into specific double-membrane autophagosome vesicles and shuttles them to the vacuole/lysosome for degradation and recycling [6]. The process of autophagic regulation is divided into several steps: initiation, elongation, fusion, and degradation [7]. The specific targeting of cytoplasmic substrates for degradation through the autophagosome depends mainly on specific cargo receptors, which contain an LC3-interacting region (LIR) motif and a ubiquitin-binding domain [8]. To date, several adaptor proteins, including p62/SQSTM1 [9-12], AMBRA1 [13], NBR1 [14], optineurin/OPTN [15,16], TAX1BP1 [17], CALCOCO2/NDP52 [18], BNIP3L/NIX [19,20], BNIP3 [21], PHB2 [22], FUNDC1 [23], cardiolipin [24], and FAM134B [25], have been identified as being involved in the recognition of cargo substrates for degradation. Most viruses activate and utilize the autophagic machinery to produce infectious progeny, with notable exceptions such as Sindbis virus (SIV) [26,27], herpesviruses (α-, β-, and γ-) [28-30], human parainfluenza virus type 3 (HPIV3) [31], and human immunodeficiency virus type 1 (HIV-1) [32]. During viral infection, selective autophagy is named according to its degradation substrate, such as the mitochondria, peroxisome, endoplasmic reticulum (ER), lysosome, and nucleus: mitophagy, pexophagy, ER-phagy, lysophagy, and nucleophagy, respectively [33,34].
Given the importance of membrane biogenesis in the interplay between viruses and organelles, in this review we briefly summarize current knowledge about how viruses modify the membrane morphology and biogenesis of intracellular organelles to support the production of infectious progeny. Moreover, we describe the potential roles of selective autophagy in the regulation of intracellular organelles upon viral infection.
To maximize their replication and evade host antiviral responses, viruses have evolved a plethora of strategies to hijack cellular organelles [1,35,36]. Each step of viral replication is closely accompanied by the rearrangement of intracellular organelles.
The mitochondria are highly dynamic organelles that form interconnected tubular networks, undergoing a balance between fusion and fission in response to intracellular and/or extracellular stresses [37] (Figure 1A,B). Mitochondrial fusion involves two sets of key GTPase proteins in mammals: the mitofusin GTPases (Mfns) Mfn1 and Mfn2 of the outer mitochondrial membrane (OMM) and optic atrophy 1 (OPA1) of the inner mitochondrial membrane (IMM) [38-42]. The Mfns mediate OMM fusion and cristae integrity [43], whereas OPA1 mediates IMM fusion and cristae integrity, regulated through its mRNA splicing forms, membrane potential, and diverse adenosine triphosphate (ATP)-dependent cellular proteases [39-41]. OMM fusion is followed by IMM fusion, resulting in the mixing of mitochondrial contents and the merging of two individual mitochondria. In a previous study, Cipolat et al. identified a specific functional cross-talk of OPA1 with Mfn1, rather than Mfn2, in mitochondrial fusion [44]. Mitochondrial fission is a complex process that includes two distinct steps: an initial constriction of mitochondrial membranes and membrane scission. The initial constriction step narrows the mitochondrial tube diameter at ER-mitochondria intersection zones, where ER tubules wrap around the OMM. Manor et al. suggested that the actin-nucleating protein spire 1C localizes to the mitochondria, directly links the mitochondria to the actin cytoskeleton and the ER, and promotes actin polymerization at the ER-mitochondria intersections [45]. Membrane scission of the mitochondria is primarily regulated by the dynamin-related GTPase DRP-1 [46]. DRP-1 is a cytosolic factor whose mitochondrial localization promotes fission, powering the constriction and division of the mitochondria primarily through post-translational modification (e.g., phosphorylation) (reviewed by Lee et al. [47]). Recent studies have reported that the recruitment of DRP-1 in mammalian cells requires several accessory proteins, such as mitochondrial fission protein 1 (Fis-1) and mitochondrial fission factor (Mff) [48]. Although these proteins are proposed to constitute the mitochondrial fission complex, how this complex mediates fission has remained unclear.
Viruses have evolved several strategies to remodel the mitochondria for viral replication and assembly, including changes in spatial distribution, morphology remodeling, and metabolic reprogramming. To maximize the effectiveness of DNA replication, African swine fever virus (ASFV) infection recruits mitochondria around the viral factories, which is associated with changes in mitochondrial morphology and their accumulation at these sites.

Figure 1. (A) The morphological diagram of the mitochondria. Mitochondria form a dynamic network pool, which constantly undergoes rearrangement and turnover. The equilibrium regulation of mitochondrial fusion-fission is essential to maintain the integrity of the mitochondria [59]. The morphology of mitochondria can be divided into hyper-fused (elongated), tubular (normal), short tubes, and fragmented [50]. (B) Regulation of mitochondrial fusion and fission. Mitochondrial fusion is mediated by the mitofusin GTPases MFN1 and MFN2 at the outer mitochondrial membrane (OMM) and by OPA1 at the inner mitochondrial membrane (IMM). Mitochondrial fission is driven by the fission machinery complex, which consists of DRP-1, Fis1, and MFF. Mitochondrial hyper-fusion is a pro-survival state, which can increase ATP production and membrane potential (Δψm) and decrease reactive oxygen species (ROS) and mitophagy [50,59]. (C) Proposed model for the nuclear aggregation of mitochondria and the possible interplay among intracellular organelles in response to virus infections. The * symbol indicates a possible interaction site. Representative viruses that increase the interactions among intracellular organelles are shown with a purple rectangle. African swine fever virus, ASFV; rubella virus, RUBV; Bunyamwera virus, BUNV.

To date, several reports have addressed the role of Mfns in innate immunity [60-62]. The interaction of Mfns with the adaptor mitochondrial antiviral signaling protein (MAVS) (also called IPS-1, Cardif, or VISA) at the mitochondria-associated membrane (MAM) leads to the initiation of the IFN signaling pathway [63,64]. Meanwhile, MAVS has also been reported to interact with MFN2, which leads to the inhibition of inflammatory cytokine production, suggesting that the MAM plays a complex role in the regulation of innate immunity [61] (detailed in review [64]). Castanier et al. also identified a cross-modulatory relationship between mitochondrial dynamics and retinoic acid-inducible gene I protein (RIG-I)-like receptor (RLR) signaling activation [60].
Certain viruses, such as influenza A virus (IAV) [65], measles virus (MV) [66], hepatitis B virus (HBV) [67-70], and hepatitis C virus (HCV) [67], induce selective autophagy to degrade fragmented mitochondria and evade innate immunity. Meanwhile, the non-structural (NS) protein 4B of dengue virus (DENV) induces mitochondrial elongation via inactivation of DRP-1 and dampens the activation of the RLR signaling pathway to promote replication [53].
Similarly, the open reading frame 9b (ORF-9b) protein encoded by SARS-CoVs causes mitochondrial elongation by triggering DRP-1 degradation and inhibits RLR signaling [54].
Collectively, viruses have evolved several strategies to hijack the mitochondria for viral genome replication and assembly, including the remodeling of mitochondrial morphology and distribution, the regulation of the fusion-fission machinery complex, and the modulation of ATP synthesis.
The ER, a single continuous membrane, consists of two primary structural subdomains: the nuclear envelope and the peripheral ER (a polygonal network) [71]. The nuclear envelope consists of two flat membrane bilayers; the peripheral ER is composed of membrane cisternae and dynamic interconnected tubules [71,72]. The ER is the largest intracellular endomembrane system and has multiple complex functions, including Ca2+ storage, fatty acid synthesis, ion homeostasis, and, in particular, the quality control of newly synthesized proteins [73]. The accumulation of misfolded or unfolded proteins in the ER lumen is known as ER stress [74]. The unfolded protein response (UPR) and ER-associated degradation (ERAD) signaling are central to maintaining the quality control of the ER [74,75]. The UPR is a signaling cascade aimed at eliminating misfolded proteins and increasing folding capacity in the lumen [74]. Protein-folding conditions in the ER lumen are primarily sensed by three integrated signaling transducers: activating transcription factor 6 (ATF6) [76], double-stranded RNA-activated protein kinase (PKR)-like ER kinase (PERK), and inositol-requiring enzyme 1α (IRE1α) [58,77] (Figure 2). Each branch uses a distinct mechanism to drive UPR signal transduction: ATF6 by regulated proteolysis, PERK by translational control, and IRE1 by non-conventional mRNA splicing [77]. By contrast, ERAD recognizes misfolded proteins and retro-translocates them into the cytoplasm for degradation via the ubiquitin-proteasome-dependent and autophagy-lysosome-dependent ERAD pathways [75,78].
A series of studies has reported that viral infections reshape the morphology and remodel the membranes of the ER [1,71], and exploit various strategies to hijack the three branches of the UPR for viral replication (Figure 2 and Table 1). The possible explanations can be summarized as follows. First, the large malleable surface area of the ER is used as a physical scaffold to protect viral RNA from degradation by the cellular mRNA decay machinery [73,79]; RNA viruses have evolved several strategies to avoid this machinery [79]. Second, viruses, particularly most RNA viruses, remodel the ER membrane to form a variety of structures for infectious progeny [5], including single-membrane spherule vesicles, double-membrane vesicles, convoluted membranes, and single-membrane sheets in the ER lumen [71]. Tenorio et al. identified that the δNS and µNS proteins of reovirus caused tubulation and fragmentation of the ER, respectively, to re-build replication sites [80], indicating that viral proteins play different roles in the rearrangement of ER membranes. Similarly, the NS4A protein of DENV induces membrane rearrangement of the ER lumen in a 2K-regulated manner [81]. Third, viruses recruit ER membranes into their replication and assembly compartments. The cytoplasmic replication sites of vaccinia virus (VV) [82,83], equine arteritis virus (EAV) [84], and poliovirus (PV) [85] are derived directly from the ER membrane.
Moreover, the ASFV structural protein p54 plays an important role in the recruitment and transformation of ER membranes into envelope precursors [86]. Fourth, viruses increase the capacity and spatial rearrangement of the ER to enhance its biogenesis, including membrane protein synthesis, fatty acid changes, and Ca2+ storage [73]. For enveloped viruses, the key molecular chaperones of the ER, including Bip/GRP78 and calnexin/calreticulin, assist the folding of the extracellular domains of viral membrane glycoproteins, such as GP2a of PRRSV [87] and the hemagglutinin-neuraminidase (HN) and fusion (F) proteins of Newcastle disease virus (NDV) [88], when they translocate into the ER lumen. Meanwhile, the reprogramming of ER biogenesis, such as Ca2+ storage, is required for the replication of viruses including HCV [89] and ASFV [90]. Fifth, viruses co-opt or subvert the ERAD processes to re-establish ER homeostasis, which actively exports malformed proteins from the ER for degradation. Human cytomegalovirus (HCMV) [91] and IAV [92] exploit the ERAD pathway to benefit viral replication. Finally, the membrane remodeling of the ER may suppress the activation of host immunity. Upon viral infection, particularly by DNA viruses, stimulator of interferon genes (STING), an ER-resident adaptor of the cyclic GMP-AMP synthase (cGAS)-STING signaling pathway, translocates from the ER to the ER-Golgi intermediate compartment (ERGIC) and the Golgi apparatus, and then activates downstream molecules [93-95]. Therefore, we speculate that the morphology remodeling and membrane modification of the ER induced by viruses may be involved in the regulation of STING trafficking, ERAD degradation, and post-translational modification, eventually allowing evasion of the cGAS-STING pathway (Figure 3).

Figure 2. Simplified diagram of the core elements of the three unfolded protein response (UPR) signaling branches of the endoplasmic reticulum (ER). During different viral infections, ER stress activates the three stress sensor proteins: IRE1α, ATF6, and PERK (detailed in reviews [77,96]).
Each sensor uses a distinct mechanism of signal transduction to drive the transcription of UPR target genes, and these branches eventually work as feedback loops to mitigate ER stress [77,96]. Upon ER stress, ATF6, a transcription factor, translocates to the Golgi compartment, where it is cleaved by the site-1 and site-2 proteases. The N-terminal cytosolic domain of cleaved ATF6 is released into the cytosol and then translocates into the nucleus, where it binds ER stress-response elements to activate target genes, including XBP-1 and C/EBP-homologous protein (CHOP) [76]. The activation of PERK inhibits general protein translation through phosphorylation of eIF2α, enabling dedicated translation of select transcripts, including ATF4, a key transducer. The IRE1 branch is regulated by non-conventional mRNA splicing [77,96]: activated IRE1 processes XBP1 mRNA to generate the spliced form of the XBP1 protein (XBP1s), which participates in the IRE1α-mediated UPR in response to ER stress [77,96]. Eventually, the activation of the cleaved ATF6 (N-ATF6), ATF4, and XBP1 transcription factors increases the protein-folding capacity of the ER lumen, while the IRE1 and PERK sensors also decrease the load of proteins entering the ER [77,96].
The peroxisomes are single-membrane-bound organelles that function in numerous metabolic pathways, including the β-oxidation of long-chain fatty acids, the detoxification of hydrogen peroxide, and the synthesis of ether phospholipids and bile acids [113,114]. Notably, the mitochondria and peroxisomes share common functions in the β-oxidation of fatty acids and the reduction of damaging peroxides. Peroxisome proliferation is largely mediated by growth and division. Peroxisomal division in mammalian cells comprises multiple processes, including membrane deformation, elongation, constriction, and fission [115]. With the exception of peroxin (PEX)-11, the peroxisomes and mitochondria share common fission machinery, including DRP-1, Mff, and Fis1 [116,117]. The fission machinery of the peroxisome is orchestrated by PEX-11β and mitochondrial fission factors [115].
Mitochondrial-derived vesicles (MDVs) are involved in the transport of mitochondrial-anchored protein ligase (MAPL), a mitochondrial outer membrane protein, to peroxisomes [118]. The retromer complex containing vacuolar protein sorting (Vps) 5, Vps 26, and Vps 29, a known component of vesicle transport from the endosome to the Golgi apparatus, also regulates the transport of MAPL, as a binding partner, from the mitochondria to peroxisomes [119].
Viruses regulate the morphology and biogenesis of peroxisomes to promote progeny replication [35]. For instance, the C-terminus of the rotavirus VP4 protein localizes directly to peroxisomes via its conserved peroxisomal targeting signal [120]. Meanwhile, viruses such as ASFV [121] have exploited the myristoyl-CoA produced by peroxisome biogenesis for the N-myristoylation of viral proteins [35], indicating that peroxisomal lipid metabolism contributes to viral replication. Another typical example is tomato bushy stunt virus (TBSV), a member of the Tombusviridae family, which infects a variety of plant species. McCartney et al. reported that TBSV induced the rearrangement of peroxisomes and caused vesiculation of the peroxisomal membrane, which provided a scaffold for virus anchoring and served as the site of viral RNA synthesis [122]. In the absence of peroxisomes, TBSV also exploits the surface of the ER membrane as a viral factory for replication and assembly [123], suggesting remarkable flexibility in the virus's use of host membranes for replication.
The Golgi apparatus is a highly dynamic organelle whose function primarily includes the formation of saccules and their utilization in vesicle formation at the opposite face for delivery [124]. The normal cellular secretory pathway, with bidirectional transport between the ER and the Golgi apparatus, is mediated by tubulovesicular transport containers that depend on two coat protein complexes, COP-I and COP-II. COP-II establishes membrane flow from the ER to the Golgi complex [125], whereas the COP-I coat acts in retrograde transport from the Golgi back to the ER [126].
In general, certain viruses utilize the cellular trafficking and secretory pathway of the ER-Golgi transport system to replicate and release their progeny [1]. PV utilizes components of the ADP-ribosylation factor (Arf) family of small GTPases [127] and cellular COP-II proteins [128] to form a membrane-bound replication complex, indicating that PV hijacks components of the cellular secretory pathway for replication. As shown in Table 2, members of the Matonaviridae [56] and Peribunyaviridae [57] hijack the Golgi complex and re-construct it as a viral factory. For example, RUBV [56] and BUNV [57] infections modify cell structural morphology and remodel the Golgi complex as a viral factory throughout the entire life cycle. Furthermore, host secretory signaling is also crucial for innate and acquired immune responses, such as the export of proinflammatory and antiviral cytokines. Nearly 25 years ago, Doedens et al. reported that the 2B and 3A proteins of PV inhibited cellular protein secretion by directly blocking transport from the ER to the Golgi apparatus [129], indicating that a functional secretory pathway is not indispensable for viral RNA replication. Mechanistically, Dodd et al. identified that the 3A-mediated inhibition of ER-to-Golgi transport limited the secretion of antiviral and inflammatory cytokines of the innate immune response, including interleukin-6, interleukin-8, and β-interferon [130]. Deitz et al. also identified that the PV 3A protein reduced the presentation of new antigens on the cell surface [131]. Considering that the ER adaptor STING is also located on the Golgi and ERGIC [93], we hypothesize that the membrane remodeling and modification of the Golgi induced by viruses might also be involved in the regulation of cGAS-STING pathways (Figure 3). Collectively, these data suggest that enteroviruses, such as PV and coxsackievirus B (CVB), have evolved a series of strategies to hijack cellular trafficking and secretion for viral replication.
The lysosome, an acidic membrane-bound organelle filled with numerous hydrolases, acts as a cellular recycling center [152]. Lysosome-based degradation processes are subject to reciprocal regulation [153]. Lysosomes degrade unwanted materials delivered either from outside the cell via the endocytic pathway or from inside via the autophagic pathway [153,154]. For viral replication and assembly, certain viruses, including alphaviruses [146] such as Semliki Forest virus (SFV) [145], exploit the membrane surface of the endosome and lysosome as a viral factory. Similarly, RUBV can also modify the lysosomal membrane into a viral factory [142]. Meanwhile, Toll-like receptors (TLRs) such as TLR3/7/9 are located on the endosome, indicating that the endosome also plays an important role in innate immunity [94]. Therefore, we speculate that another possible strategy is that viruses, particularly DNA viruses, evade TLR-mediated activation of NF-κB and the transcription of proinflammatory cytokines. HBV infection suppresses TLR-9 expression and prevents TLR9 promoter activity in human primary B cells [155]. Interestingly, DENV, a positive-strand RNA virus, activates TLR9 by inducing mtDNA release in human dendritic cells [156]. Additionally, the endosomal-lysosomal sorting system is a complex and dynamic vesicular sorting pathway that is fundamental to maintaining homeostasis [157]. Viruses, particularly enveloped viruses such as HIV [151], have evolved several strategies to hijack the endosomal sorting complex required for transport (ESCRT) to facilitate viral budding. Collectively, these data indicate that different viruses utilize different strategies to hijack the endosome/lysosome for viral progeny.
Macroautophagy was initially described as a non-selective degradation process [33]. However, selective autophagy is characterized as a highly regulated and specific degradation pathway targeting damaged organelles [158]. The initiation of autophagy includes the formation of the phagophore from membrane precursors. The phagophore elongates through the ubiquitin-like conjugation systems and LC3-II-phosphatidylethanolamine to form the autophagosome [33]. The autophagosome sequesters damaged intracellular organelles, such as the mitochondria, ER, peroxisomes, nucleus, and lysosomes, and undergoes fusion with a lysosome to form an autolysosome, where degradation occurs [33]. Depending on the targeted organelle, selective autophagy can be divided into mitophagy, pexophagy, ER-phagy, lysophagy, nucleophagy, etc. [33] (Figure 3). In most cases, viral infection leads to severe damage of intracellular organelles, which subsequently initiates selective autophagy to degrade them; selective autophagy is therefore a protective mechanism by which cells maintain homeostasis. In some cases, however, selective autophagy can be utilized by viruses to promote their replication. Here, we address the principal mechanisms of selective autophagy triggered by viral infection, with an emphasis on mitophagy and pexophagy, which have been best described to date.

Figure 3. Autophagy is a conserved catabolic process, which is artificially divided into several steps: initiation, elongation, fusion, and degradation [7]. The initiation of autophagy includes the formation of the phagophore, the initial sequestering compartment. The isolation membrane elongates and expands into a double-membrane structure called an autophagosome, which engulfs its cargo (the damaged organelles indicated in the figure). Completion of the autophagosome is followed by fusion with lysosomes to form autolysosomes, where degradation of the cargo occurs [33]. The cargo adaptor interacts directly with the damaged intracellular organelle and an autophagy modifier of the ATG8/LC3 family, functioning as a bridge between polyubiquitinated cargo and the autophagosome. The autophagy adaptors contain at least an LC3-interacting region (LIR) motif and a C-terminal ubiquitin-associated domain, which is responsible for binding mono- and poly-ubiquitinated substrates. Organelles destined for selective autophagy are often marked for degradation by the addition of ubiquitin by E3 ubiquitin ligases and deubiquitinating enzymes. The adaptor mitochondrial antiviral signaling protein (MAVS), an important downstream adapter of RIG-I-mediated antiviral signaling, is located on both mitochondria and peroxisomes [159]. The stimulator of interferon genes (STING), an important downstream effector of the cGAS-STING pathway, is located in the ER [94].
The selective elimination of damaged mitochondria is termed mitophagy, a type of macroautophagy [6] (Figure 3). Fragmented mitochondria are recognized by autophagosomes more readily than elongated mitochondria because of an imbalance in mitochondrial fission-fusion [160-163]. The canonical ubiquitin-dependent PTEN-induced putative kinase 1 (PINK1)-Parkin mitophagy signaling pathway has been validated in multiple model systems by different approaches [164-166]. Mitochondrial PINK1 is rapidly turned over on bioenergetically well-coupled mitochondria through proteolysis by the presenilin-associated rhomboid-like protein (PARL) [167], but PINK1 is selectively stabilized on mitochondria that have lost membrane potential (Δψm) [168,169]. The selective accumulation of PINK1 recruits downstream Parkin, a cytosolic RING-between-RING E3 ligase, to the impaired mitochondria [169]. In turn, Parkin-induced mitophagy is strictly dependent on depolarization-induced PINK1 accumulation [167,169]. Chen et al. previously reported that PINK1-phosphorylated Mfn2 acts as a receptor mediating Parkin recruitment to damaged mitochondria [166]. Meanwhile, phosphorylated ubiquitin at Ser65 (p-Ser65-Ub) chains have also been identified as potent Parkin activators and receptors that lead to the onset of mitophagy [170-172]. Notably, Lazarou et al. recently reported that PINK1 recruits the NDP52 and OPTN cargo receptors, but not p62/SQSTM1, to directly activate mitophagy independently of Parkin [173]. Similarly, BNIP3L/NIX-mediated mitophagy, which protects against ischemic brain injury, is independent of Parkin [19,20]. Furthermore, upon mitophagy induction, AMBRA1 binds to the autophagosome adapter LC3 to initiate potent mitophagy, promoting both canonical PINK1-Parkin-dependent and Parkin-independent mitochondrial clearance [13]. All the aforementioned data show that Parkin may act as an amplifier that is not indispensable for mitophagy. Intriguingly, the protein kinase PINK1 and Parkin are also involved in the generation of MDVs [174].
Peroxisome homeostasis is regulated by division and pexophagic degradation. The degradation of aberrant peroxisomes by selective autophagy is known as pexophagy [33,114,175] (Figure 3). Four different types of pexophagy have been identified in mammalian cells [115]: ubiquitin-mediated pexophagy [176], NBR1-induced pexophagy [177], PEX3-induced pexophagy [178], and PEX14-LC3 interaction-mediated pexophagy [179]. Compared with pexophagy in yeast [114], the underlying mechanisms of pexophagy in mammalian cells are less well elucidated. The ubiquitination of mammalian PEX5 [180,181] and PMP70 [181] has been identified in pexophagy. In response to reactive oxygen species, ataxia-telangiectasia-mutated kinase phosphorylates PEX5 and eventually links the peroxisome with the adaptor p62/SQSTM1 for pexophagy [180]. During amino acid starvation, the peroxisomal E3 ubiquitin ligase PEX-2 ubiquitinates downstream PEX5 and PMP70, leading to peroxisome degradation in an NBR1-dependent manner [181].
Under various stresses, complex signaling pathways are involved in the activation and regulation of mitophagy and pexophagy. The detailed mechanisms need to be further elucidated in future research.
RLR-dependent type I interferon responses are regulated by MAVS, which was initially thought to be located only in the OMM of the mitochondria [182]. Upon viral infection, MAVS binds to RLRs and promotes the activation of downstream signal transduction [93,94]. Khan et al. reported that HCV attenuates innate immunity via the Parkin-dependent recruitment of the linear ubiquitin assembly complex to the mitochondria [183]. Similarly, the Edmonston strain of MV (MV-Edm), an attenuated measles virus, triggers MAVS degradation via p62/SQSTM1-mediated mitophagy to weaken RLR signaling [66]. The matrix protein (M) of HPIV3 [184] and PB1-F2 of IAV [185] induce mitophagy through a PINK1-Parkin-independent pathway to suppress innate immunity. More importantly, Dixit et al. recently identified that a fraction of MAVS is also located on the peroxisome for antiviral signal transduction [159]. Upon viral infection, the mitochondria and peroxisomes are thus not simply metabolic organelles but serve as critical subcellular platforms for antiviral immunity, which expands our understanding of the integration of antiviral networks among intracellular organelles. Mitochondrial MAVS may mediate the proapoptotic signaling of innate immune activity against viral infections [186,187]. Previous reports suggested that HCV NS3/4A proteolytically cleaves MAVS from the mitochondria [188-190] at the cysteine 508 residue, rather than degrading MAVS, to cripple innate immunity [191]. Horner et al. further identified that HCV NS3/4A targets the mitochondria-associated membrane (MAM) and cleaves MAVS from the MAM, but not from the mitochondria, to ablate the immune actions of the MAVS signalosome during HCV infection [192]. Taken together, various viruses have evolved a plethora of strategies that exploit mitophagy and pexophagy to suppress MAVS-dependent RLR signaling and maximize their own replication.
Different viruses have exploited different strategies to utilize the UPR of the ER for viral replication (Table 1). In our laboratory, we found that NDV activates all three branches of the UPR to facilitate NDV replication [105]. The synergistic expression of the HN and F proteins of virulent NDV is necessary for UPR activation [88]. However, the exact mechanism by which a virus leads to the accumulation of unfolded proteins in the lumen and utilizes the UPR of the ER needs to be further investigated. The selective degradation of the ER is termed ER-phagy [193,194]. Upon stimulation by an ER stress inducer, the signaling networks of ATF6, PERK, and IRE1α, together with cellular Ca2+, are necessary for the activation of ER-phagy at different stages, including induction, vesicle nucleation, and elongation [194] (Figure 3). The PERK and IRE1α branches of ER stress are indispensable for DENV-induced autophagy [98]. To date, FAM134 [25], BNIP3/Nix [21], and p62/SQSTM1 [12] have been identified as cargo receptors of ER-phagy. Additionally, Tomar et al. [195] reported that TRIM13, an ER-resident ubiquitin E3 ligase, regulates the ER-phagy process and ER stress. Considering the important role of ER-localized STING in antiviral innate immunity [93,95], we speculate that ER-phagy may be involved in the inhibition of the cGAS-STING pathway during viral infections, particularly DNA virus infections (Figure 3). The precise underlying mechanisms of ER-phagy upon viral infection need to be further investigated.
Selective autophagy is initiated by isolation membranes, which subsequently close to form double-membrane-bound autophagosomes that eventually fuse with lysosomes for degradation [6,34]. Notably, the autophagosome itself cannot degrade its contents; only after fusion with the lysosome, which provides an acidic environment and hydrolases, can the autophagosomal contents be degraded. Numerous inducers can trigger lysosomal membrane permeabilization and the consequent leakage of lysosomal content into the cytosol, which eventually leads to so-called "lysosomal cell death" [152]. The removal of damaged lysosomes by selective autophagy is termed lysophagy [33,34] (Figure 3). Moreover, nucleophagy is a form of selective autophagy that removes damaged or non-essential nuclear material [113,196] (Figure 3). Recently, Unterholzner et al. identified IFI16, a PYHIN protein, as a new DNA sensor [197]. Considering the nuclear distribution of IFI16, we speculate that nucleophagy may be involved in IFI16-dependent innate immunity (Figure 3). Compared with mitophagy and pexophagy, many questions regarding the molecular details of the lysophagy and nucleophagy pathways remain to be elucidated. One interesting question is how the nucleus and lysosome are sequestered by the phagophore and recognized by the cargo adaptor in response to viral infections.
In the current review, we present a brief overview of the quality control strategies of intracellular organelles in mammalian cells in response to viral infection. Although distinct steps of the viral life cycle have long been known to be associated with extensive membrane rearrangement of intracellular organelles [4], a detailed understanding of the interplay between virus and host, particularly the interaction between individual viral proteins and organelle components, has remained elusive. Several important scientific questions remain unanswered. First, what is the mechanism that coerces the host translational machinery into synthesizing viral proteins during the production of infectious progeny? Second, how do viral and cellular proteins contribute to the re-construction of viral replication factories at different subcellular membrane sites? Third, which viral proteins and cellular factors are required for membrane remodeling and metabolic reprogramming in virus-infected cells?
Moreover, although we have made great progress in understanding selective autophagy, the assembly site of the double membrane has remained unclear. The assembly of the phagophore may require various membranes, including the ER [198], ER-Golgi intermediate compartments (ERGIC) [199], ER-mitochondria junctions [200], mitochondria [201], mitochondrial-derived vesicles (MDVs) [202], Golgi-endosomal membranes [203], and the plasma membrane [204]. Further work is needed to decipher the exact phagophore assembly site of selective autophagy during DNA or RNA virus infection. More importantly, it remains unclear how the host cell initiates the "eat-me" signal for the elimination and elongation of the phagophore membrane around targeted organelles. Every selective autophagy pathway requires a specific cargo receptor, which bridges ubiquitinated organelles to LC3/Atg8 family members to link with the autophagy machinery. In mammalian cells, several cargo proteins, including p62/SQSTM1, CALCOCO2/NDP52, NBR1, optineurin/OPTN, AMBRA1, BNIP3L/NIX, BNIP3, FUNDC1, TAX1BP1, cardiolipin, prohibitin-2/PHB-2, and FAM134B (Table 3), have been identified; however, the exact processes of recognition and specific selection of damaged organelles for degradation by selective autophagosomes during viral infection remain poorly understood. Notably, Lazarou et al. identified NDP52 and OPTN as the primary receptors for PINK1-dependent mitophagy, independent of Parkin; PINK1-generated phospho-ubiquitin directly serves as the "eat-me" signal on the mitochondria [173], which extends our understanding of classical PINK1-Parkin mitophagy signaling. Furthermore, Tank-binding kinase 1 is involved in the phosphorylation of cargo receptors, including OPTN and NDP52, to create an "eat-me" signal at the autophagy-relevant site [15]. The difference between the autophagy receptors NDP52 and p62 determines the species-specific impact of the autophagy machinery during chikungunya virus (CHIKV) infection, indicating that a receptor may regulate viral infection in a species-dependent manner [205]. Recently, cardiolipin, an inner mitochondrial membrane phospholipid, was found to serve as an "eat-me" signal for mitochondrial clearance in neuronal cells [24]. Meanwhile, Wei et al. [22] recently identified prohibitin-2/PHB-2 as a specific IMM mitophagy receptor for autophagic degradation. Interestingly, the matrix protein of HPIV [184] and the PB1-F2 protein of IAV [185] can act as receptors in the induction of mitophagy. The specific selection of cargo proteins during damaged organelle degradation may depend primarily on the targeted organelle and viral characteristics.
Extensive improvement of our understanding of the cross-talk between viruses and organelles will depend on the innovative application of new techniques and materials [206], such as single-particle tracking, ribopuromycylation, single-cell RNA-seq, three-dimensional (3D) reconstruction of electron microscopy, image-based genome-wide RNA interference screens, haploid genetic screens, yeast two-hybrid screens, modern ultrastructural techniques, and, in particular, high-throughput, genome-scale CRISPR-Cas screening. Electron tomography and 3D imaging technology have been successfully applied to the study of virus-cell interactions, as for EAV [84], RUBV [56], and BUNV [57]. Similarly, Ertel et al. recently revealed new interior and exterior features of the RNA replication compartments of the non-human flock house nodavirus via cryo-electron tomography [144]. Based on image-based genome-wide siRNA screening, Orvedahl et al. identified SMAD-specific E3 ubiquitin protein ligase 1 (SMURF1) as a newly recognized mediator of mitophagy triggered by viral infection, but not of starvation-induced autophagy [207]. Researchers should keep a close eye on such technological breakthroughs and integrate them into a comprehensive understanding of virus-organelle interactions. The underlying virus-host molecular mechanisms often overlap across multiple viruses, which advances the discovery of druggable host targets and the development of broad-spectrum antiviral approaches.
|
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19), an infection for which no specific vaccine is currently available (2, 3). T cells are reported to be pivotal in mounting a successful immune response against COVID-19, as recovered individuals exhibit SARS-CoV-2-specific T cell memory, and T cell dysfunction and imbalance have been reported as hallmarks of severe COVID-19 (4, 5). Both CD4+ and CD8+ T cells have been implicated in COVID-19, with CD4+ T cells being broadly Th1-like in their secretion of the cytokines interleukin-2 (IL-2), interferon gamma (IFN-γ) and tumour necrosis factor (TNF), and CD8+ T cells also secreting TNF and IFN-γ as well as effecting direct target cell lysis through the secretion of perforin and granzymes (6). Cross-reactive T cells between other human coronaviruses and SARS-CoV-2 have been identified, suggesting a potential role for T cell cross-protection in COVID-19 (7, 8). Here we investigated whether cross-reactive SARS-CoV-2-specific T cells can arise from Bacillus Calmette-Guérin (BCG)-derived peptide sensitization.
The BCG vaccine, containing live attenuated Mycobacterium bovis and hereafter referred to as BCG, is primarily used to vaccinate against tuberculosis (TB). It can also induce cross-protection against pathogens unrelated to TB: these cross-protective effects have been shown to reduce all-cause mortality in children and respiratory tract infections in adults (9-12). One mechanism of cross-protection is BCG epigenetically modifying innate immune cells, in the form of trained innate immunity lasting up to one year (13, 14). The heterologous effect of BCG vaccination on T cells has been demonstrated in other viral infections, such as murine vaccinia virus infection and HPV papillomatosis (15-18).
Given the heterologous effects of BCG vaccination, more than 15 clinical trials are currently underway globally to test the cross-protective effect of BCG in COVID-19, most notably the BRACE study involving 10,000 healthcare workers in Australia and the Netherlands (1). Although reports from these prospective trials are still forthcoming, large country-level epidemiological analyses have shown a negative correlation between BCG vaccination status of a country and COVID-19 disease severity or case growth (19) (20) (21) .
Here we show that the observed benefits of BCG vaccination in the context of COVID-19 can be attributed, in part, to T cell cross-reactivity.
T cells specific for SARS-CoV-2 are being increasingly characterised and recognised as pivotal in mounting a successful immune response to COVID-19 (6). To study the extent to which BCG-primed T cells could cross-react with SARS-CoV-2 epitopes and promote viral clearance, we first performed NCBI protein BLAST searches against the SARS-CoV-2 proteome, restricting results to BCG proteins.
Regions of protein sequence homology were identified between BCG sequences and the nonstructural proteins NSP3 and NSP13 located in ORF1ab of SARS-CoV-2 ( Fig. 1 and table S1 ). When processed as 15mers for MHCII presentation, these regions exhibit up to 60% identity and 73.3% similarity between BCG and SARS-CoV-2 (table S1). Percent identity and similarity of constituent 9mers for MHCI presentation are up to 88.8% and 100%, respectively, permitting cross-reactive CD4+ and CD8+ T cell responses.
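To make the identity and similarity figures concrete, the sketch below (ours, not the authors' pipeline) computes both metrics for an aligned peptide pair, counting positions with a positive BLOSUM62 score as "similar", a common convention; the example 15mers are hypothetical placeholders, not peptides from table S1, and Biopython (>= 1.75) is assumed to be available.

```python
# A minimal sketch, assuming Biopython >= 1.75; not the authors' actual pipeline.
# "Similarity" counts aligned positions with a positive BLOSUM62 score; identical
# positions always score positively, so similarity >= identity, matching the text.
from Bio.Align import substitution_matrices

BLOSUM62 = substitution_matrices.load("BLOSUM62")

def identity_similarity(pep_a, pep_b):
    """Percent identity and percent similarity of two aligned, equal-length peptides."""
    assert len(pep_a) == len(pep_b)
    n = len(pep_a)
    identical = sum(a == b for a, b in zip(pep_a, pep_b))
    similar = sum(BLOSUM62[a, b] > 0 for a, b in zip(pep_a, pep_b))
    return 100 * identical / n, 100 * similar / n

# Hypothetical 15mer pair, for illustration only (not a pair from table S1):
print(identity_similarity("GDAATAYANSVFNIC", "GDSATAYANAVFNLC"))
```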
Figure 1| Amino acid sequence alignment of the peptide pairs (PP) of BCG (top sequence) and SARS-CoV-2 (bottom sequence) used in this study. Red amino acids - identity. Yellow amino acids - similarity. Grey amino acids - no identity or similarity.
NSP3 is a papain-like proteinase that shares a macro domain with two BCG proteins: a macro-domain-containing protein and the UPF0189 protein. This macro-domain-containing protein is conserved among the Mycobacterium tuberculosis complex, including BCG (accession number WP_003909539.1). NSP13 is a helicase that shares homology with the BCG proteins RecB nuclease and zinc-metalloprotease FtsH. Both RecB nuclease and zinc-metalloprotease FtsH contain a Walker A motif sequence that is identical in NSP13 of SARS-CoV-2. Additionally, RecB nuclease contains two other regions of homology with NSP13, around amino acid residues 952-966 and 1093-1107. As has been previously reported, NSP13 is highly conserved among other human coronaviruses. Thus, the T cell cross-protective potential of BCG holds not only for SARS-CoV-2, as we have shown, but potentially also for the other human coronaviruses that cause the common cold (229E, NL63, OC43 and HKU1) and the more serious human coronaviruses SARS-CoV and Middle East respiratory syndrome coronavirus (MERS-CoV). NSP3 is, however, not as widely conserved among coronaviruses (7, 22). For cross-reactivity to occur between homologous epitopes, a significant degree of homology must be paired with the capacity of an immunogenic peptide to bind cognate MHC class I or II. Indeed, HLA binding has been reported as important in COVID-19 severity: patients with mild COVID-19 presented MHCI molecules with a higher theoretical affinity than those with moderate to severe COVID-19 (23). To assess the capacity of BCG epitopes to bind HLA alleles that broadly cover the global population, we performed in silico prediction analyses of peptide-MHC binding affinity using NetMHCIIpan 4.0 and NetMHCpan 4.1 across each region of homology, as 9mers (MHCI binding) or 15mers (MHCII binding) overlapping by 1 amino acid residue (24). HLA alleles in the analysis were selected based on previously reported reference sets giving maximal global population coverage (25, 26). We found that the BCG-derived peptides with sequences homologous to SARS-CoV-2 peptides exhibited broad MHC class II and MHC class I binding capacity (Figs S1 and S2).
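As a concrete illustration of the scanning step, the sketch below enumerates the overlapping 9mers and 15mers that would be submitted to the MHC binding predictors; the input region is a hypothetical placeholder, not one of the study's homology regions.

```python
# A minimal sketch of enumerating overlapping k-mers across a homology region
# before MHC binding prediction (9mers for MHCI, 15mers for MHCII).
def overlapping_kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

region = "GDAATAYANSVFNICQAVTANVNALLST"  # hypothetical region, for illustration
print(overlapping_kmers(region, 9)[:3])   # first three 9mers
print(overlapping_kmers(region, 15)[:3])  # first three 15mers
```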
To determine the cross-reactive immunogenicity of these BCG derived peptides across diverse HLA-types, we selected 10 healthy HLA-typed blood donors with different HLA types (Table S2b) .
Based on IEDB population coverage, our collection of HLA-typed individuals gave global MHC class I and II coverage of 97.21% and 99.97%, respectively (27). In addition, binding affinity predictions of the homologous peptides to the HLA alleles of the 10 HLA-typed donors used in this study were analysed (Figs. S1 & S2). Based on homology and strong binding, eight different 15mer peptide pairs (PP1-8) were chosen for subsequent experiments with human donor cells (Fig. 1). To determine whether HLA typing was necessary, we also tested the cross-reactive immunogenicity of the BCG-derived peptides in 10 non-HLA-typed donors.
To determine if priming with BCG peptide enhances T cell responses to SARS-CoV-2 peptides, we compared CD4+ and CD8+ T cell responses to SARS-CoV2 peptides using cells that were either primed with a control peptide (invariant chain peptide, CLIP) or BCG peptide. CD3+ T cells were isolated from donors (n=20, table S2a) and co-cultured with dendritic cells (DCs) in vitro (Fig. 3) .
Individual BCG peptides were first used to sensitize and expand the BCG-specific T cells, simulating a BCG vaccination. T cells were then rested for two days without antigen stimulation and restimulated with the SARS-CoV-2 peptides. To measure T cell responses, we performed intracellular cytokine staining (ICS) for IFN-γ, TNF, IL-2 and perforin; surface staining for the early T cell activation marker CD69; and a two-colour proliferation assay using a combination of Cell Trace Yellow (CTY) and Cell Trace Violet (CTV) to differentiate between primary and secondary proliferative responses. A positive response was defined as an increase compared to control.
All individuals (n=20) exhibited a positive response to at least 7 out of 8 SARS-CoV-2 peptides (Fig. 2). The enhanced cross-reactive response confirms the prediction of high HLA binding affinity, and we confirm that these cross-reactive peptides are immunogenic, as they elicit CD4+ Th1-like responses and robust CD8+ responses.
Figure 2| Heat map of individuals representing global HLA coverage shows improved SARS-CoV-2 T cell responses when stimulated with SARS-CoV-2 peptide. Individual donor T cell responses to the 8 peptide pairs (PP1-PP8) across 11 parameters (i-xi) determined by flow cytometry. i - CD8+ IFN-γ, ii - CD8+ TNF, iii - CD8+ IL-2, iv - CD8+ CD69, v - CD8+ Perforin, vi - CD8+ proliferation, vii - CD4+ IFN-γ, viii - CD4+ TNF, ix - CD4+ IL-2, x - CD4+ CD69, xi - CD4+ proliferation. A responder (red) is defined as showing a positive response after subtraction of the control-primed response to SARS-CoV-2. A non-responder (white) is defined as showing no positive staining after subtraction of the control response. Grey - data not available. Individuals were grouped by known or unknown HLA type, highlighting similar patterns between the two groups.
Next, we assessed the degree of SARS-CoV-2 T cell reactivity enhancement conferred by BCG priming compared with control-primed T cells (Fig. 3 and Figs. S5 & S6). In CD8+ cytotoxic T cells, IFN-γ, TNF and IL-2 cytokine production significantly increased across all 8 peptide pairs (Fig. 3 and Fig. S5).
Figure 3| BCG-peptide-primed CD3+ T cells were restimulated with SARS-CoV-2-peptide-pulsed dendritic cells for 6 hr and analysed by intracellular cytokine staining. Unshaded bars - control primed using the irrelevant peptide CLIP103-117, then SARS-CoV-2 peptide 1-8 restimulated. Shaded bars - BCG peptide 1-8 primed, then restimulated with the SARS-CoV-2 peptide homologue. a) Brief timeline of the culture. b) CD8+ IFN-γ+ responses (n=9-12). c) CD4+ IFN-γ+ responses (n=5-13). d) CD8+ TNF+ responses (n=4-14). e) CD4+ TNF+ responses (n=6-16). f) Representative TNF (x-axis) and IFN-γ (y-axis) dot plots of a responder donor with the corresponding SARS-CoV-2 primary response control. *P < 0.05, **P < 0.01, ***P < 0.001 by Wilcoxon matched-pairs signed rank test of responder samples, comparing the magnitude of response to SARS-CoV-2 peptides with or without BCG peptide priming.
Patterns of cytokine production varied between individuals and peptide pairs, which reflects the complex pattern of T cell cytokine expression and phenotypes that BCG vaccination is known to produce (28). Indeed, the IFN-γ and TNF response was mixed, with some individuals making only IFN-γ or TNF in response to a particular peptide pair and some being positive for both (Fig. S8). This observation is concordant with previously reported responses in COVID-19 (6, 7).
Since the COVID-19 CD8+ response involves the secretion of perforin and granzymes for an effective antiviral response, we measured perforin expression by ICS. We found that CD8+ T cells primed with BCG-derived peptides showed enhanced perforin expression upon SARS-CoV-2 restimulation compared with control-primed cells (Fig. S5b). Cross-reactive perforin expression in responders was significantly increased across all 8 peptide pairs, with a mean fold-increase ranging from 1.9-fold (PP1) to 47.2-fold (PP4). Thus, cross-reactive CD8+ T cells can effect an antiviral response by target cell lysis.
In order to mount an effective T cell response to COVID-19, antigen-specific T cells need to become activated and undergo clonal expansion. To assess whether T cells pre-stimulated with BCG-derived peptides exhibit enhanced T cell activation when restimulated with SARS-CoV-2 homologues, expression of early T cell activation marker CD69 was assessed by flow cytometry.
We show that, compared with a SARS-CoV-2 primary response, BCG-primed T cells increased CD69 expression across all 8 peptide pairs (Figs. S5c, S6b). CD69 expression in responders showed a mean fold-increase ranging from 3.2-fold (PP1) to 29.6-fold (PP5) for CD4+ cells and from 1.7-fold (PP1) to 10.5-fold (PP2) for CD8+ cells. These cross-reactive T cells, which show increased activation when primed with BCG peptides and restimulated with SARS-CoV-2 homologues, are able to proliferate and mount superior effector functions compared with cells not pre-sensitized with BCG peptides. This may be important for swift and effective clearance of SARS-CoV-2 in COVID-19 patients.
To assess whether T cells primed with BCG-derived peptides show enhanced proliferation upon SARS-CoV-2 peptide restimulation, the cell proliferation dye CTY was used to assess proliferation after BCG priming, followed by CTV to assess proliferation after SARS-CoV-2 restimulation. All donor samples primed with a BCG peptide developed enhanced T cell proliferation to at least 3 of the 8 SARS-CoV-2 peptides tested (Fig. 2). The magnitude of the enhanced proliferative response was also assessed in BCG-primed individuals who responded to SARS-CoV-2 restimulation. Specifically, we compared SARS-CoV-2 peptide-induced proliferation in cells first sensitized with a BCG peptide with that in cells sensitized with a control peptide. For all tested peptide pairs (PP1-PP8) and in both CD4+ and CD8+ T cells, BCG-peptide-sensitized cells developed significantly enhanced proliferation to the homologous SARS-CoV-2 peptide (Fig. 4). In the responders, T cell proliferation was enhanced in CD8+ T cells by 19% (PP3 and PP5) to 51% (PP6) and in CD4+ T cells by 11% (PP5) to 39% (PP8). Therefore, BCG peptides have the potential to cross-protect against SARS-CoV-2 through T cell activation and heightened T cell proliferation.
Figure 4| BCG priming enhances CD4+ and CD8+ T cell proliferation. a) Brief culture timeline. b) CD8+ restimulation proliferation response is enhanced by BCG priming (n=9-12). c) CD4+ restimulation proliferation response is enhanced by BCG priming (n=6-11). Unshaded bars - control primed using the irrelevant peptide CLIP103-117, then SARS-CoV-2 peptide restimulated. Shaded bars - BCG peptide primed, then restimulated with the SARS-CoV-2 peptide homologue. d) Representative CTV versus CTY dot plots of CD4+ and CD8+ cultured cells indicating proliferation. The quadrant gate of CTY hi CTV hi cells did not proliferate upon priming or restimulation. The quadrant gate of CTY lo CTV hi cells proliferated upon priming but not upon restimulation. The quadrant gate of CTY lo CTV lo cells proliferated upon both priming and restimulation. *P < 0.05, **P < 0.01, ***P < 0.001 by Wilcoxon matched-pairs signed rank test of responder samples, comparing the magnitude of response to SARS-CoV-2 peptides with or without BCG peptide priming.
To confirm and establish the memory phenotype of T cells that proliferated after BCG stimulation, the proportions of T effector memory (Tem), T central memory (Tcm) and T effector memory re-expressing CD45RA (TEMRA) cells were determined based on CD45RA and CCR7 expression patterns in the proliferated CD4+ or CD8+ cells (Fig. S7). Across all 8 tested BCG peptides, greater than 99% of proliferated BCG-stimulated cells exhibited a Tcm, Tem or TEMRA memory phenotype at day 16, the remainder being of naïve phenotype (CD45RA+, CCR7+). Of these memory cells, Tem was the predominant phenotype among both CD4+ and CD8+ cells. CD4+ cells exhibited a minor Tcm subpopulation and few TEMRA cells, whereas CD8+ cells exhibited a minor TEMRA subpopulation and few Tcm cells. The predominance of T memory phenotypes among BCG-stimulated T cells provides an explanation for their potential to heighten recall responses.
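The phenotype calls above follow the standard CD45RA/CCR7 quadrant logic (see Fig. S4g); a minimal sketch of that mapping:

```python
# A minimal sketch of the CD45RA/CCR7 memory-phenotype calling rule used above.
def memory_phenotype(cd45ra_pos, ccr7_pos):
    if cd45ra_pos and ccr7_pos:
        return "naive"          # CD45RA+ CCR7+
    if cd45ra_pos:
        return "TEMRA"          # CD45RA+ CCR7-
    if ccr7_pos:
        return "Tcm"            # CD45RA- CCR7+
    return "Tem"                # CD45RA- CCR7-

print(memory_phenotype(cd45ra_pos=False, ccr7_pos=False))  # -> "Tem"
```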
Although the self-reported BCG vaccination status of our donors was known (n=10), we found no significant difference in responses from BCG vaccinated individuals compared to unvaccinated individuals (data not shown). The co-culture assay was not designed to test the direct ex vivo recall response of prior BCG vaccination but rather to simulate vaccination in vitro by pre-stimulating with BCG-derived peptides. We analysed an equal number of males and females in this study (n=10 each) and no significant sex-specific differences were found in the parameters measured (data not shown).
Collectively, our results demonstrate that CD4+ and CD8+ T cells specific for BCG-derived peptides are cross-reactive with SARS-CoV-2 peptides. These data provide a mechanistic explanation for the observed negative epidemiological associations between BCG vaccination and COVID-19 severity and mortality, and support the continuation of clinical trials around the world, particularly in people at high risk of contracting SARS-CoV-2.
The study was conducted according to the Declaration of Helsinki and approved by Monash University Human Research Ethics Committee project ID 25834. All donors provided written informed consent.
No statistical methods were used to pre-determine sample size. The experiments were not randomised. The investigators were not blinded to allocation during experiments and assessment of outcomes.
Seven donors underwent high resolution class I and II molecular sequence-based typing performed by the Australian Red Cross Victorian Transplantation and Immunogenetics Service by next-generation sequencing. Three donors underwent low-resolution HLA-DR typing at the same provider. HLA typing results are contained within table S2b.
Global allele coverage of the HLA-typed donors was assessed with the IEDB Analysis Resource - Population Coverage tool (27). NetMHCpan-4.1 and NetMHCIIpan-4.0, which use artificial neural networks, were used to predict the binding affinity of the homologous peptides to a globally representative collection of MHCI or MHCII alleles plus the alleles of our HLA-typed donors (24-26). For each region of homology, 9mers (MHCI) and 15mers (MHCII) overlapping by 1 amino acid underwent affinity analysis. An affinity rank score was generated by normalizing the prediction score against predictions for a set of random peptides. An affinity rank score of < 2 was called a strong binder, a score of ≥ 2 and ≤ 10 a binder, and a score of > 10 a non-binder.
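A minimal sketch of this binder-calling rule, applied to NetMHCpan/NetMHCIIpan percentile rank scores (lower rank = stronger predicted binding); the example scores are illustrative, not study data:

```python
# A minimal sketch of the affinity-rank thresholds described above.
def call_binder(rank_score):
    if rank_score < 2:
        return "strong binder"
    if rank_score <= 10:
        return "binder"
    return "non-binder"

for r in (0.4, 5.0, 25.0):  # illustrative rank scores only
    print(r, "->", call_binder(r))
```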
Protein BLAST search of the SARS-CoV-2 proteome (sequence ID NC_045512.2) restricted to Mycobacterium bovis (BCG) was performed using the NCBI blastp suite (https://blast.ncbi.nlm.nih.gov/Blast.cgi).
Protein sequences from SARS-CoV-2 NSP3 (YP_009725299.1), SARS-CoV-2 NSP13 (YP_009725308.1) and BCG RecB nuclease (KAF3412556.1), among others, were used for the sequence alignments.
15mer peptides were synthesised with an N-terminal free amine (H-) and a free acid group at the C-terminus (-OH). Peptides were ≥ 90% pure as assessed by reversed-phase high-performance liquid chromatography (RP-HPLC) (Mimotopes). Peptide sequences used in this study can be found in table S1; the control peptide was CLIP103-117 (PVSKMRMATPLLMQA). Lyophilized peptide was reconstituted in sterile MilliQ water with 5% (v/v) DMSO (Sigma). The final concentration of peptides used in culture was 10 µg/mL, and the final concentration of DMSO in the cultures was 0.005% (v/v).
Human PBMCs were freshly isolated from whole donor blood collected in K2EDTA anticoagulant Vacutainers (BD) using Lymphoprep density gradient medium (Stemcell) and SepMate tubes (Stemcell). PBMCs were enumerated in a haemocytometer with 0.4% trypan blue (Sigma), and the CD14+ CD16- monocytes were then magnetically separated using the EasySep Human Monocyte Isolation Kit (Stemcell). The isolated monocytes were then enumerated in a haemocytometer with 0.4% trypan blue, and a differentiation culture was established to differentiate the monocytes into dendritic cells using the ImmunoCult Dendritic Cell Culture Kit following the manufacturer's instructions (Stemcell).
According to the protocol (Stemcell), immature DCs used in the ICS co-culture did not receive maturation supplement on day 5 of culture, whereas mature DCs used in the proliferation and memory co-culture received maturation supplement on day 5 of culture. After 7 days of culture, immature DCs were used for the ICS co-culture and mature DCs were used for the proliferation co-culture.
Human CD3+ T cells were isolated from fresh whole donor blood in K2EDTA tubes using RosetteSep HLA T Cell Enrichment Cocktail according to instructions of the manufacturer (Stemcell). Isolated CD3+ T cells were enumerated in a haemocytometer with 0.4% trypan blue (Sigma). CD3+ T cells were then used in the ICS and proliferation co-cultures.
The ICS co-culture was initiated with 100,000 freshly isolated human CD3+ T cells, 10,000 human immature monocyte-derived DCs and 10 µg/mL of BCG peptide from PP1-8 (Fig. 1) or control peptide CLIP103-117 in a 96-well round-bottom plate (Corning), at 100 µL per well of complete RPMI (Gibco) supplemented with 10% autologous human serum, 100 U/mL penicillin and 0.1 mg/mL streptomycin (Gibco), 2 mM L-glutamine (Gibco) and 50 µM 2-mercaptoethanol (Sigma). The positive assay control received anti-human CD2, anti-human CD3 and anti-human CD28 coated MACSiBeads at a ratio of 1 bead : 2 cells, prepared from the human T cell activation/expansion kit as per the manufacturer's instructions (Miltenyi). The negative assay control received no peptides. The co-culture was incubated at 37 °C in a CO2 incubator (Binder). Five days later, the co-culture was supplemented with 40 IU/mL recombinant human IL-2 (Stemcell) and reincubated. On day 7 of co-culture, cells were rested by washing twice in 250 µL PBS to remove peptides, resuspended in 100 µL fresh complete RPMI (formulated as above, with no peptides) and reincubated. On day 9 of co-culture, cells were restimulated by washing twice with 250 µL PBS; then 10,000 freshly cultured immature DCs were added per well with 10 µg/mL of SARS-CoV-2 peptide from PP1-8 (Fig. 1) or control peptide CLIP103-117 and 1 µg/mL anti-human CD28 monoclonal antibody (clone CD28.2, eBioscience) in serum-free RPMI. The positive assay control received anti-human CD2, anti-human CD3 and anti-human CD28 coated MACSiBeads at a ratio of 1 bead : 2 cells. The negative assay control received no peptides. To pulse the DCs with peptide, the co-culture was incubated for 2 hours at 37 °C in a CO2 incubator (Binder). After 2 hours, the medium was adjusted to contain 10% autologous serum, 1X protein transport inhibitor cocktail containing brefeldin A and monensin (eBioscience) was added, and the culture was reincubated. After 6 hours at 37 °C in a CO2 incubator, cells were harvested for flow cytometric analysis by ICS. The entire culture system was set up to be autologous.
The proliferation co-culture was initiated with 100,000 freshly isolated CD3+ T cells stained with the cell proliferation dye Cell Trace Yellow according to the manufacturer's instructions (Invitrogen), 10,000 human mature DCs and 10 µg/mL of BCG peptide from PP1-8 (Fig. 1) or control peptide CLIP103-117 in a 96-well round-bottom plate (Corning), at 100 µL per well of complete RPMI (Gibco) supplemented with 10% autologous human serum, 100 U/mL penicillin and 0.1 mg/mL streptomycin (Gibco), 2 mM L-glutamine (Gibco) and 50 µM 2-mercaptoethanol (Sigma). The positive assay control received anti-human CD2, anti-human CD3 and anti-human CD28 coated MACSiBeads at a ratio of 1 bead : 2 cells, prepared from the human T cell activation/expansion kit as per the manufacturer's instructions (Miltenyi). The negative assay control received no peptides. The co-culture was incubated at 37 °C in a CO2 incubator (Binder). Seven days later, cells were washed twice in 250 µL PBS to remove peptides, resuspended in 100 µL complete RPMI (formulated as above, with no peptides) and reincubated. On day 9 of co-culture, cells were washed twice in 250 µL PBS and stained with the Cell Trace Violet cell proliferation dye according to the manufacturer's instructions (Invitrogen). Then 10,000 freshly cultured human mature DCs were added per well with 10 µg/mL of SARS-CoV-2 peptide from PP1-8 (Fig. 1) or control peptide CLIP103-117. The positive assay control received anti-human CD2, anti-human CD3 and anti-human CD28 coated MACSiBeads at a ratio of 1 bead : 2 cells. The negative assay control received no peptides. The co-culture was incubated at 37 °C in a CO2 incubator for 7 days, then harvested for flow cytometric analysis. The entire culture system was set up to be autologous.
After culturing, cells were stained with the Live/Dead Fixable Near-IR Dead Cell Stain Kit. Individuals that responded in the given parameters to SARS-CoV-2 after BCG priming (Fig. 2), i.e., that showed a response when BCG primed and SARS-CoV-2 restimulated, were defined as showing positive staining after subtraction of the primary SARS-CoV-2 response control (CLIP103-117 primed, SARS-CoV-2 peptide stimulated). A non-responder was defined as showing no positive staining after subtraction of the primary SARS-CoV-2 response control. For statistical analysis (Fig. 4), responders were selected as those with positive staining after subtraction of the primary SARS-CoV-2 response control or the restimulation background control (BCG primed, irrelevant peptide CLIP restimulated). Proliferation in response to BCG priming and SARS-CoV-2 restimulation was calculated as the proportion of CD4+ or CD8+ cells that underwent proliferation both post-priming and post-restimulation (CTY lo CTV lo) out of the total proliferated cells (CTY lo CTV lo and CTY lo CTV hi).
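A minimal sketch of this proliferation metric, using hypothetical gate counts in place of FlowJo exports:

```python
# Fraction of cells that divided after BOTH priming and restimulation
# (CTY-lo CTV-lo) among all cells that divided after priming (CTY-lo).
def restim_proliferation(cty_lo_ctv_lo, cty_lo_ctv_hi):
    total_primed_proliferated = cty_lo_ctv_lo + cty_lo_ctv_hi
    return 100 * cty_lo_ctv_lo / total_primed_proliferated

# Hypothetical event counts standing in for exported quadrant gates:
print(restim_proliferation(cty_lo_ctv_lo=1200, cty_lo_ctv_hi=1800))  # -> 40.0
```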
Flow cytometry data were exported from FlowJo 10.6.2 (BD) and processed in RStudio version 1.3.959 before being analysed in GraphPad Prism 7 (GraphPad Software Inc.). A Shapiro-Wilk test was used to assess normality, followed by a two-tailed Wilcoxon matched-pairs signed rank test comparing the responses of BCG-primed with control-primed samples from responders.
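A minimal sketch of this testing procedure in Python with SciPy, assuming paired per-donor responses (the values below are hypothetical placeholders, not study data):

```python
# Shapiro-Wilk normality check on paired differences, then the two-tailed
# Wilcoxon matched-pairs signed rank test, as described above.
from scipy.stats import shapiro, wilcoxon

control_primed = [0.8, 1.1, 0.5, 1.4, 0.9, 0.7, 1.2]  # % cytokine+ cells (hypothetical)
bcg_primed     = [2.1, 3.4, 1.0, 4.2, 2.8, 1.9, 3.0]

diffs = [b - c for b, c in zip(bcg_primed, control_primed)]
print(shapiro(diffs))                        # normality of the paired differences
print(wilcoxon(bcg_primed, control_primed))  # paired, two-tailed by default
```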
Correspondence and requests for materials should be addressed to J.D.O.
Supplementary figures and tables:
Figure S1: Affinity rank scores of BCG 15mers overlapping by 1 amino acid across the regions of shared homology between BCG and SARS-CoV-2. Hotspots of high affinity overlap broadly with regions of high homology. Red gradient; strong peptide-MHC binder (affinity rank ≤ 2). Yellow gradient; peptide-MHC binder (affinity rank > 2 and ≤ 10). Grey gradient; non-binder (affinity rank > 10). Y-axis; number indicates the amino acid sequence start position of the respective 15mer. Red numbers indicate the 15mers analysed in this study. X-axis; MHC class II alleles grouped into HLA-DR (n=20), HLA-DQ (n=22) and HLA-DP (n=15) isotypes. The selected alleles are globally representative and include all alleles from the HLA-typed donors used in this study. Pink gradient; pairwise percent sequence similarity between BCG and SARS-CoV-2 15mers. a) BCG RecB nuclease (RBN) affinity rank binding scores. b) BCG UPF0189 protein (UPF) affinity rank binding scores. c) BCG macro domain containing protein (MDCP) affinity rank binding scores. d) BCG zinc metalloprotease FtsH (ZMP) affinity rank binding scores.
Figure S2: Affinity rank scores of BCG 9mers overlapping by 1 amino acid across the regions of shared homology between BCG and SARS-CoV-2. Red gradient; strong peptide-MHC binder (affinity rank ≤ 2). Yellow gradient; peptide-MHC binder (affinity rank > 2 and ≤ 10). Grey gradient; non-binder (affinity rank > 10). Y-axis; number indicates the amino acid sequence start position of the respective 9mer. Red numbers indicate the overlapping 9mers contained within the 15mers analysed in this study. X-axis; MHC class I alleles grouped into HLA-A (n=16), HLA-B (n=14) and HLA-C (n=2) isotypes. The selected alleles are globally representative and include the alleles from the HLA-typed donors used in this study. Pink gradient; pairwise percent sequence similarity between BCG and SARS-CoV-2 9mers. a) BCG RecB nuclease (RBN) affinity rank binding scores. b) BCG macro domain containing protein (MDCP) affinity rank binding scores. c) BCG UPF0189 protein (UPF) affinity rank binding scores. d) BCG zinc metalloprotease FtsH (ZMP) affinity rank binding scores.
Table S1| Amino acid sequence alignment of the peptide pairs (PP) of BCG and SARS-CoV-2 used in this study, including NCBI accession number, similarity, identity and BLOSUM62 matrix score. | - an identical amino acid match, : - a similar amino acid match, . - no match. RBN - RecB nuclease, MDCP - macro domain containing protein, UPF - UPF0189 protein, ZMP - zinc metalloprotease FtsH.
Table S2a| Donor characteristics.
Table S2b| Donor HLA alleles used in this study.
Figure S4: Flow cytometry gating strategy for T cell proliferation and memory. a) Forward scatter area (FSC-A) versus side scatter area (SSC-A) density plot gating the lymphocyte population. b) Side scatter height (SSC-H) versus SSC-A density plot gating the single-cell population. c) Live/dead discrimination dye (LD Near-IR) versus SSC-A density plot gating the live cell population. d) CD3 versus SSC-A density plot gating the CD3+ T cells. e) The CD3+ cells are separated into CD4+ and CD8+ T cells. f) The proliferation dye Cell Trace Yellow (CTY) versus Cell Trace Violet (CTV) density plots are quadrant gated based on proliferation after priming and restimulation. Q5 - CTY low CTV high T cells that proliferated after priming but not after restimulation. Q6 - CTY high CTV high T cells that did not proliferate after priming or restimulation. Q7 - CTY high CTV low T cells that did not proliferate after priming but proliferated after restimulation. Q8 - CTY low CTV low T cells that proliferated both after priming and after restimulation. g) The CD45RA versus CCR7 density plots of the CD4+ or CD8+ parent populations are quadrant gated, separating Q1 - CD45RA- CCR7+ (T central memory cells), Q2 - CD45RA+ CCR7+ (naïve T cells), Q3 - CD45RA+ CCR7- (TEMRA cells), and Q4 - CD45RA- CCR7- (T effector memory cells). All fluorescence-based gating except CTY and CTV was determined based on fluorescence-minus-one (FMO) controls. CTV and CTY gating was based on the point at which the first cell division took place, visible by fluorescence dye dilution.
Figure S5: BCG-peptide 1-8 primed (shaded bars) or control primed using the irrelevant peptide CLIP103-117 (unshaded bars) CD3+ T cells were restimulated with SARS-CoV-2-peptide-homologue-pulsed dendritic cells for 6 hr and analysed by intracellular cytokine staining. Responders were selected for analysis as per the methods section. *P < 0.05, **P < 0.01, ***P < 0.001 by Wilcoxon matched-pairs signed rank test. a) CD8+ IL-2+ responses (n=7-13). b) CD8+ Perforin+ responses (n=6-14). c) CD8+ CD69+ responses (n=5-12). d) Representative IL-2, perforin and CD69 dot plots of a responder donor, control-primed (top) and BCG-primed (bottom).
Figure S6: BCG-peptide 1-8 primed (shaded bars) or control primed using the irrelevant peptide CLIP103-117 (unshaded bars) CD3+ T cells were restimulated with SARS-CoV-2-peptide-homologue-pulsed dendritic cells for 6 hr and analysed by intracellular cytokine staining. Responders were selected as per the methods section. *P < 0.05, **P < 0.01, ***P < 0.001 by Wilcoxon matched-pairs signed rank test. a) CD4+ IL-2+ responses (n=7-13). b) CD4+ CD69+ responses (n=8-13). c) Representative IL-2 and CD69 dot plots of a responder donor, control-primed (top) and BCG-primed (bottom).
Figure S7: BCG peptide stimulation of T cells produces > 99% T memory phenotype after 16 days of co-culture. T effector memory (Tem) is the dominant memory phenotype for both CD4+ and CD8+ T cells. CD8+ T cells exhibit a subpopulation of T effector memory cells re-expressing CD45RA (TEMRA) with minimal T central memory (Tcm). CD4+ T cells exhibit a subpopulation of Tcm with minimal TEMRA. a) Composition of CD4+ and CD8+ T cell memory subpopulations in BCG UPF0189 protein25-39 (from peptide pair 6) primed T cells, cultured for 16 days: 7 days with peptide and 9 days without peptide. A representative sample of 5 individuals (from n=20) for 1 peptide pair (from n=8). b) Representative dot plots of CD45RA versus CCR7 expression of the proliferated CD4+ or CD8+ T cells based on Cell Trace Violet dilution. T memory phenotype was characterised as three subpopulations by expression of CD45RA and CCR7: Tem of phenotype CD45RA- CCR7-; Tcm of phenotype CD45RA- CCR7+; TEMRA of phenotype CD45RA+ CCR7-. Non-memory naïve T cells have the phenotype CD45RA+ CCR7+.
|
Since the first cases of COVID-19 were reported on December 1, 2019 in Wuhan City, Hubei Province, China (Huang et al., 2020), the overall number of confirmed cases in China had reached 78,959 by the end of February 27, 2020, and a total of 2,791 people had died of the disease.
COVID-19 had also spread to 50 other countries, where the confirmed cases and deaths were 4,696 and 67, respectively, by the end of February 27, 2020. The World Health Organization declared the COVID-19 epidemic an international public health emergency on January 30, 2020.
To prevent further dissemination of SARS-CoV-2, 31 Provinces in mainland China had raised their public health response level to the highest state of emergency (level 1) by January 29, 2020. The Chinese government has implemented a series of large-scale public health interventions to control the epidemic, many of which have far exceeded what the International Health Regulations required, especially the Wuhan lock-down, nationwide traffic restrictions and the Stay At Home Movement. Wuhan prohibited all transport in and out of the city as of 10:00 on January 23, 2020; this may be the largest quarantine/movement restriction in human history to prevent infectious disease spread (Tian et al., 2020). Hundreds of millions of Chinese residents, including 9 million Wuhan residents, had to reduce or even stop their inter-city travel and intra-city activities due to these strict measures.
Regarding the Wuhan lock-down, Kucharski et al. (2020) estimated that the median daily reproduction number had declined from 2.35 on January 16 to 1.05 on January 30, and Tian et al. (2020) estimated that the dispersal of infection to other cities was delayed by 2.91 days (CI: 2.54-3.29). However, Read et al. (2020) suggested that travel restrictions from and to Wuhan city are unlikely to be effective in halting transmission across China: even with a 99% effective reduction in travel, the size of the epidemic outside of Wuhan may only be reduced by 24.9% by February 4. Do these large-scale public health interventions really work well in China? Moreover, hundreds of officials were dismissed or appointed rapidly according to their incompetent or outstanding performance during the epidemic. How should the efforts of different regions in mainland China against COVID-19 be assessed?
Here we present a simple yet effective model, based on Baidu Migration data and confirmed case data, to quantify the consequences and importance of the Wuhan lock-down, combined with the nationwide traffic restrictions and the Stay At Home Movement, for the ongoing spread of COVID-19 across mainland China, and to preliminarily assess the efforts of 29 Provinces and 44 prefecture-level cities during the epidemic.
Due to the Wuhan lock-down, more than 9 million residents were isolated in Wuhan City from January 23, 2020; only 1.2 million travelers moved from or to Wuhan during January 24 to February 15, 2020 according to Baidu Migration. Travelers were down 91.61% and 91.62% compared with the same period last year (14 million) and with the first 23 days of 2020 (14 million from January 1 to January 23), respectively (Fig. 1).
Due to the Stay At Home Movement, the mean intensity of intra-city activities for 316 cities was 2.61/d during January 24 to February 15, 2020 according to Baidu Migration. It was down 42.42% and 50.27% compared with the same period last year (4.53/d) and with the first 23 days (5.25/d), respectively (Fig. 3).
Obviously, COVID-19 greatly reduced human mobility in mainland China.
We considered 44 regions in mainland China that accepted travelers from Wuhan, including 29 Provinces (Tibet was excluded since only one confirmed case was reported) and 15 prefecture-level cities in Hubei Province. We noticed that the numbers of confirmed cases in non-Hubei and Hubei regions were closer in the early period. For example, the numbers of cumulative confirmed cases by the end of January 26 in Chongqing (non-Hubei) and Xiaogan (Hubei) were 110 and 100, respectively. Their cumulative confirmed cases by the end of February 27, however, were 576 and 3,517, respectively. We surmise that this is partly because Xiaogan received more infected cases from Wuhan than Chongqing after the human-to-human transmission risk of COVID-19 was confirmed and announced on January 20. This surmise is supported by Fig. 4: the proportion of travelers from Wuhan accepted by Hubei regions, out of all travelers from Wuhan, increased rapidly from 70% (before January 19) to 74% (January 20), and exceeded 77% after January 26. We therefore conclude that the first key factor affecting the later (e.g., February 27) cumulative confirmed cases of each non-Wuhan region is the total number of immigrants from Wuhan during January 20 to January 26 (there were few immigrants after January 27). These immigrants had a higher probability of being infected but lower transmission ability, because the susceptible population strengthened its protection awareness and measures after the declaration of human-to-human transmission.
The second key factor is the total number of infected immigrants from Wuhan before January 19. According to recent reports, there is a mean 10-day delay between infection and detection, comprising a mean incubation period of about 5 days and a mean 5-day delay from symptom onset to detection of a case (Imai et al., 2020; Yang et al., 2020; Li et al., 2020). So the second key factor can be represented by the number of cumulative confirmed cases by the end of January 29. These "seed cases" had higher transmission ability because the susceptible population had not yet taken any protection measures. A simple regression model was established as follows.
y = 70.0916 + 0.0054 × x1 + 2.3455 × x2 (n = 44, R² = 0.9330, P < 10⁻⁷)
Here, y is the number of cumulative confirmed cases by February 27 of each non-Wuhan region, x1 is the sum number of immigrants from Wuhan during January 20 to January 26 of each non-Wuhan region, and x2 is the number of cumulative confirmed cases by January 29 of each non-Wuhan region.
The standard regression coefficients of x1 and x2 are 0.6620 and 0.3796, respectively, indicating that x1 is more important than x2 in determining y. The observed and expected values of the cumulative confirmed cases by the end of February 27 were compared for each region.
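A minimal sketch of fitting this two-predictor model and deriving standardized coefficients, using a handful of hypothetical regions in place of the 44-region dataset:

```python
# OLS of cumulative cases by Feb 27 (y) on Wuhan immigrants Jan 20-26 (x1)
# and cumulative cases by Jan 29 (x2); standardized (beta) coefficients
# are b_j * sd(x_j) / sd(y). All six rows below are hypothetical.
import numpy as np
import statsmodels.api as sm

x1 = np.array([120_000, 60_000, 30_000, 15_000, 8_000, 3_000])
x2 = np.array([110, 60, 25, 12, 8, 4])
y  = np.array([1_050, 520, 260, 140, 90, 55])

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
print(fit.params, fit.rsquared)

beta = fit.params[1:] * np.array([x1.std(), x2.std()]) / y.std()
print(beta)  # standardized coefficients, comparable to the 0.6620 / 0.3796 above
```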
To evaluate the effect of the Wuhan lock-down, x2, the number of cumulative confirmed cases by January 29, should be held fixed. We then assumed three different lock-down plans besides the real plan (Table 1). Table 1 lists the assumed daily emigrant series for each plan, where I20 to I29 denote the real emigrants from January 20 to January 29, respectively; under the Strict plan (*), no one left Wuhan from the date of lock-down.
It is not fair to assess the effort of different regions based only on the final number of cumulative confirmed cases, because of differences in the number of immigrants from Wuhan. The aforementioned interpretative model takes this difference into account, and the results are listed in 5 grades (Excellent, Good, Normal, Poor, Very poor) according to standardized residuals (Table 2 and Table 3). We emphasize that this is only a preliminary evaluation and the results are for reference only.
The human mobility data on inter-city travel and intra-city activity from January 1, 2020 to February 21, 2020 (including same-period data for 2019) in mainland China are from Baidu Migration (http://qianxi.baidu.com).
The inter-city travel population of each city is represented by immigration and emigration indices; the proportions of travelers by destination and departure, at the Province and city level (but only for the top 100 cities), are also listed. The intra-city activity intensity of each city is represented by the index (not the proportion) of the active population relative to the total population.
The real traveler populations of Nanjing, Qingdao, Shenzhen and Foshan during the 2019 Spring Festival travel rush (40 d) are from the official websites of their Transportation Commissions. We estimate that one Baidu Migration index unit corresponds to about 56,137 travelers (Fig. 6).
The confirmed case data on each Province and prefecture-level city are from the National Health Commission of China (http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml) and its affiliates.
Baidu Migration records more types of spatial displacement, including airplane, high-speed rail, ship, coach and private car, so it theoretically has higher accuracy. The real inter-city travel population of each city, however, is not directly provided. The cumulative emigration index of Wuhan up to the last day before the lock-down (January 22) is 95.98; the two model-based estimates of the corresponding traveler total are 2.9 million and 5.4 million, respectively. The News Release Conference of Wuhan on January 26 confirmed that more than 5 million people had left Wuhan after January 10 due to the Spring Festival travel rush and the epidemic. Our estimated value is closer to the official report.
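A worked check of this conversion, applying the stated factor to Wuhan's cumulative pre-lock-down emigration index:

```python
# One Baidu Migration index unit ~= 56,137 travelers (estimated above).
INDEX_TO_TRAVELERS = 56_137
wuhan_emigration_index = 95.98        # cumulative index up to January 22
print(f"{wuhan_emigration_index * INDEX_TO_TRAVELERS:,.0f}")
# -> ~5,388,029 travelers, consistent with the official report of >5 million
```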
It is worth noting that the Baidu Migration index still has the following disadvantages for estimating the real traveler population. 1) The mobility behavior of the large number of people not connected to Baidu Map and third-party users is not recorded. 2) The spatial displacement of users is recorded only within 8 hours. 3) Most trips are disassembled and not fully identified. For example, if a user travels from A, passes through B and arrives at final destination C, and the user happens to have location information in C, this trip will be disassembled into A-B and B-C. Regardless, big data has played and will continue to play an important role in public health. There were 9.48 million residents in Wuhan around January 26, and the cumulative confirmed cases were 2,261 by January 29. We estimate that at least 56,916 people were infected in Wuhan according to our model (up to February 27, the confirmed cases were 48,137). In other words, more than 8,000 infected but undetected individuals still await detection at the center of the epidemic storm. Wuhan still has much to do.
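The arithmetic behind the "more than 8,000 undetected" statement:

```python
estimated_infected = 56_916   # model estimate for Wuhan
confirmed_by_feb27 = 48_137   # confirmed cases up to February 27
print(estimated_infected - confirmed_by_feb27)  # -> 8,779 infected but undetected
```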
SARS-CoV-2 has diverse transmission routes, including respiratory (droplet) and contact routes, which have been confirmed, as well as aerosol and digestive (fecal-oral) routes, which cannot be ruled out (National Health Commission of China, 2020). It is also highly concealed, according to recent reports of transmission of the virus from asymptomatic and mild individuals (Zhang et al., 2020; Sanche et al., 2020; Yang et al., 2020). At the beginning of the outbreak, the number of infected individuals may have been greatly underestimated due to asymptomatic transmission, insufficient sensitivity of diagnostic reagents and delayed diagnosis. The latest estimates of R0 and the control reproduction number are 4.7-6.6 (Steven et al., 2020) and 6.47 (CI: 5.71-7.23) (Tang et al., 2020), respectively. SARS-CoV-2 is highly contagious; Wang et al. (2020) projected that, without any control measures, the infected population would exceed 200,000 in Wuhan by the end of February. The corresponding estimates of Steven et al. (2020) and Read et al. (2020) were 233,400 (CI: 38,757-778,278) by the end of January and 191,529 (CI: 132,649) by February 4, respectively. Up to February 27, the confirmed cases were 48,137.
The only lesson that humans have learned from history is that humans have not learned anything from history.
Clearly, Wuhan did not learn the lessons of the SARS epidemic in 2003 and is now suffering from its early delays. Fortunately, the Chinese government has implemented a series of large-scale public health interventions to control the epidemic. In fact, many prevention and control measures taken by China, especially the Wuhan lock-down, nationwide traffic restrictions and the Stay At Home Movement, go far beyond the requirements for responding to emergencies, setting new benchmarks for epidemic prevention in other countries. The Chinese approach has proven successful: the strategy adopted by China has flattened the fast-rising curve of newly diagnosed cases, and the simplest and most direct evidence for this is the data (Fig. 7).
The SARS-CoV-2 epidemic is still growing rapidly and had spread to more than 42 countries as of February 27, 2020.
At present, the most seriously affected countries outside China are South Korea, Italy, Iran and Japan. It is worrying that, although some measures have been taken, the current prevention and control measures of these countries may still be insufficient: none of them has reached the level of prevention and control of China's moderately affected regions.
|
The Bacille Calmette-Guérin (BCG) tuberculosis vaccine has immunity benefits against non-targeted pathogens 1, and in particular against respiratory infections caused by RNA viruses such as influenza 2. Since SARS-CoV-2 is also a single-stranded RNA virus, it has been hypothesized that differences in BCG vaccination coverage could explain the wide differences in disease burden observed between countries.
A pioneering preprint by Miller et al. found that countries with universal BCG childhood vaccination policies tend to be less affected by the COVID-19 pandemic, in terms of their numbers of cases and deaths 3. While unpublished, this study had a great impact and gave rise to many comments and follow-up studies (reviewed in 4). Some published studies were able to replicate this result 5,6, but several authors underlined the important statistical flaws inherent to such ecological studies and concluded that randomized controlled trials (RCTs) were necessary to address the question 4,7. As of June 5th, 2020, no fewer than 12 RCTs studying the protective effect of BCG against COVID-19 were already registered on https://clinicaltrials.gov/. However, none has a primary completion date earlier than October 1st, 2020, so their first results will not be known for at least five or six months. With the epidemic still on the rise worldwide, and in the absence of a SARS-CoV-2 vaccine, there is an urgent need to know whether BCG non-specific effects could be harnessed as a substitute prophylactic treatment.
Regression discontinuity (RD) is a method designed by social scientists to assess the effect of an exposure on an outcome. It is deemed as reliable as RCTs at teasing out causality from correlation 8, and typically yields results similar to those obtained in RCTs 9,10. In this paper, we applied this method to a rare natural experiment that took place in Sweden. Sweden currently has the 5th-highest ratio of COVID-19 deaths per capita in the world. In April 1975, it stopped its newborn BCG vaccination program, leading to a dramatic drop in the BCG vaccination rate, from 92% to 2%, for cohorts born just before and just after the change 11. We compared the numbers of COVID-19 cases, hospitalizations, and deaths per capita for cohorts born just before and just after April 1975, representing 1,026,304 and 1,018,544 individuals, respectively. Using RD, we were able to show that these cohorts do not differ in COVID-19 cases, hospitalizations, or deaths per capita, with a precision that would be hard to reach with an RCT design.
Regression discontinuity (RD) is a commonly-used method to measure the effect of a treatment on an outcome 13 . It is applicable when only individuals that satisfy a strict criterion are eligible for a policy.
Then, RD amounts to comparing the outcome of interest among individuals just above and just below the eligibility threshold. In this study, RD amounts to comparing the numbers of COVID-19 cases, hospitalizations, and deaths among individuals born just before and just after April 1st, 1975. The effect of universal BCG vaccination for individuals born around April 1st, 1975 was estimated using the state-of-the-art estimator for RD 14. The estimator compares treated and control units in a narrow window around April 1st, 1975. It uses linear regressions of the outcome on birth cohort to the left and to the right of the threshold to predict the outcomes of treated and untreated units at the threshold.
Then, the estimator is the difference between these two predicted values. The estimator and its 95% confidence interval were computed using the rdrobust Stata command 15.
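For illustration only: the analysis itself relies on the rdrobust implementation, with its data-driven bandwidth selection and robust bias-corrected confidence intervals, but the core of the estimator can be sketched in a few lines. The Python sketch below uses a fixed, user-chosen bandwidth and simulated data; the function name and all inputs are hypothetical.

```python
import numpy as np

def rd_estimate(x, y, cutoff, bandwidth):
    """Sharp regression-discontinuity estimate via two local linear fits.

    x: running variable (here, birth cohort); y: outcome (e.g. COVID-19
    cases per 1000 inhabitants). Returns the jump in the fitted outcome
    at the cutoff.
    """
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    # Fit y = a + b * (x - cutoff) on each side; the intercept a is the
    # predicted outcome at the threshold from that side.
    a_left, _ = np.polynomial.polynomial.polyfit(x[left] - cutoff, y[left], 1)
    a_right, _ = np.polynomial.polynomial.polyfit(x[right] - cutoff, y[right], 1)
    return a_right - a_left

# Toy usage: simulated quarterly cohorts with no true discontinuity.
rng = np.random.default_rng(0)
cohort = np.arange(1930.0, 2002.0, 0.25)
cases = 5 + 0.02 * (cohort - 1975) + rng.normal(0, 0.2, cohort.size)
print(rd_estimate(cohort, cases, cutoff=1975.25, bandwidth=10))  # close to 0
```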
This study uses the number of COVID-19 cases per 1000 inhabitants for quarterly birth cohorts born between Q1-1930 and Q4-2001, the number of COVID-19 hospitalizations per 1000 inhabitants for cohorts born between Q1-1930 and Q4-1991, and the number of COVID-19 deaths per 1000 inhabitants for groups of three yearly birth cohorts (YBCs), from 1930-1931-1932 to 1978-1979-1980. These variables were constructed using data compiled by the Public Health Agency of Sweden; see supplementary Table 1 for details.
In an RD design, the presence or absence of a treatment effect can be assessed visually, by drawing a scatter-plot with the variable determining eligibility on the x-axis, and the outcome variable on the y-axis.
If one observes that the relationship between these two variables jumps discontinuously at the eligibility threshold, this is indicative of a treatment effect. Accordingly, Figure 1 shows no discontinuity in the numbers of COVID-19 cases per 1000 inhabitants for cohorts born just before and just after April 1975.
This suggests that universal BCG vaccination has no effect on the number of COVID-19 cases per 1000 inhabitants for individuals born in 1975. Figures 2 and 3 show that similar conclusions apply to the numbers of COVID-19 hospitalizations and deaths per 1000 inhabitants. The number of deaths per 1000 inhabitants is several orders of magnitude higher for older than for younger cohorts, so Figure 3 only presents those numbers for cohorts born after 1960 to keep the graph legible.
This visual analysis is confirmed by the statistical calculations reported in Table 1. For COVID-19 deaths per 1000 inhabitants, there are only two data points to the right of the Q2-1975 threshold, so the RD estimator cannot be computed for that outcome. Instead, we compared the numbers of COVID-19 deaths per 1000 inhabitants in the 1972-1973-1974 and 1975-1976-1977 YBCs using a standard t-test, even though this method, unlike the RD method, does not account for the age difference between the two groups. Doing so, we find that the difference in deaths per 1000 inhabitants between the two groups is not different from 0. The effects in Table 1 are intention-to-treat effects 16: not all Swedish residents born just before April 1975 received the BCG vaccine at birth, and some of those born just after April 1975 received it. In particular, foreign-born residents account for 27.2% of the Swedish population born in 1975 according to Statistics Sweden's data, and they were not affected by the 1975 policy. Among natives, the policy led to a drop in the vaccination rate from 92% to 2% 11. Assuming that the BCG vaccination rate of foreign-born residents is the same just before and just after April 1975, a reasonable assumption since no other European country changed its BCG vaccination policy in 1975, the policy led to a drop of 0.655 in the population-wide BCG vaccination rate at birth. To convert the intention-to-treat effects in Table 1 into the effect of being vaccinated at birth, one needs to divide these effects and their confidence intervals by 0.655, the change in the BCG vaccination rate at birth induced by the reform 17.
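The 0.655 first-stage factor can be reconstructed from the shares quoted above. A minimal sketch (Python), assuming the decomposition 0.655 = (1 - 0.272) x (0.92 - 0.02); this decomposition is our inference, as the text states only the final figure. The ITT values in the rescaling example are made up for illustration:

```python
# Reconstruction of the 0.655 first-stage factor from the shares quoted in
# the text (27.2% foreign-born; native vaccination fell from 92% to 2%).
native_share = 1 - 0.272
drop_among_natives = 0.92 - 0.02

first_stage = native_share * drop_among_natives
print(f"Drop in BCG coverage at the threshold: {first_stage:.3f}")  # 0.655

# Rescaling a hypothetical intention-to-treat estimate and its CI bounds:
itt, ci_low, ci_high = 0.10, -0.20, 0.40  # made-up numbers for illustration
effect_of_vaccination = [v / first_stage for v in (itt, ci_low, ci_high)]
```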
In this study, we took advantage of a change in vaccination policy in Sweden to investigate the link between BCG vaccination in infancy and COVID-19 cases, hospitalizations, and deaths, using a regression discontinuity approach.
Contrary to most studies on the question, we compared COVID-19 outcomes between two very similar groups of people from the same country. This eliminates the confounders that plague cross-country comparisons, as well as confounders such as sex or socioeconomic status that are often present in observational studies lacking a quasi-experimental design. Another strength of this study is its statistical precision. Since we could gather nationwide data in a country where COVID-19 rates are high, we are able to reject fairly small effects of the BCG vaccine. Achieving comparable statistical precision in an RCT would require an unrealistically large sample: even with a COVID-19 hospitalization rate of 0.5%, as among the elderly Swedish population, a randomized controlled trial able to reject BCG effects larger than 24% of the baseline hospitalization rate, as in our study, would require including around 15,000 participants.
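The machinery behind such sample-size statements is the standard two-proportion calculation, sketched below in Python. The preprint does not spell out the power and significance levels behind its 15,000 figure; with conventional assumptions of a two-sided alpha = 0.05 and 80% power (our assumptions, not the authors'), the formula returns an even larger number, which only reinforces the point that such a trial would be unrealistically large:

```python
from math import sqrt

def n_per_arm(p_control, relative_effect, z_alpha=1.96, z_beta=0.8416):
    """Participants per arm to detect a given relative reduction in an
    event rate (two-sided alpha via z_alpha, power via z_beta)."""
    p_treat = p_control * (1 - relative_effect)
    p_bar = (p_control + p_treat) / 2
    delta = p_control - p_treat
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p_control * (1 - p_control)
                           + p_treat * (1 - p_treat))) ** 2
    return num / delta ** 2

# Baseline hospitalization rate 0.5%; effect size 24% of baseline:
print(round(n_per_arm(0.005, 0.24)))  # roughly 48,000 per arm
```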
While previous studies mostly addressed differences in BCG vaccination policies without accounting for differences in actual BCG coverage, we work with two populations with well-documented and very different vaccination rates 11. Prior to that, Sweden had already eliminated its revaccination program at 7 years of age in 1965, and it also stopped its later revaccination program. Moreover, this study does not measure the COVID-19 immunity benefit conferred by a recent BCG vaccination, as individuals born just before Q2-1975 were vaccinated 45 years ago. The RCTs currently underway will tell whether the protective effect of a recent BCG vaccination differs from the effect measured in this study.
Overall, this study shows that BCG vaccination at birth does not have a strong protective effect against COVID-19. Thus, it seems that BCG childhood vaccination policies cannot account for the differences in the severity of the pandemic across countries, as had been hypothesized by prior studies 3,5,6. This advocates for strict adherence to WHO's recommendation to reserve the vaccine for infants outside of clinical trials 21, and for restraint in starting new clinical trials on this question. The question is of particular importance for a vaccine whose lengthy production process regularly leads to worldwide shortages, with dire consequences for children in countries with a high prevalence of tuberculosis 22.
While RCTs will complement this study by measuring the effect of a recent vaccination, this study comes well before the RCTs' results will be available, and with greater precision. Finally, it exemplifies the potential of leveraging past medical policies, together with statistical techniques designed in the social sciences, to answer current medical questions.
Table 1 notes (fragment): …(2), and the 95% confidence interval of this effect is shown in Column (3).
Figure 3 caption (fragment): COVID-19 deaths per 1000 inhabitants, from birth cohorts 1960-1961-1962 to 1978-1979-1980. 1975, when vaccination at birth was discontinued, is represented by the red vertical line.
The copyright holder for this preprint this version posted June 23, 2020. . https://doi.org/10.1101/2020.06.22.20137802 doi: medRxiv preprint Notes: Some QBCs after Q4-1991 had less than 5 hospitalizations, so Folkhälsomyndigheten could not provide their number of hospitalizations due to confidentiality issues.
The Swedish population per QBC is not publicly available, while the population per YBC is. Rather than simply dividing each YBC's population by four to recover each QBC's population, we account for the fact that Sweden exhibits slight quarterly birth seasonality. From 1930 to 2001, 25.6%, 27.0%, 24.9%, and 22.5% of births took place during Q1, Q2, Q3, and Q4, respectively. To infer a QBC's population, we multiply the population of the corresponding YBC by the proportion that quarter accounts for among the births that took place that year in Sweden. Not all Swedish inhabitants were born in Sweden, so this computation implicitly assumes that quarterly birth seasonality is the same in Sweden as in the countries from which foreign-born Sweden residents immigrated. Foreign-born individuals account for only 22% of the 1930-2001 YBC Swedish population, and quarterly birth seasonality is weak in most countries, so this assumption should not strongly affect the results.
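A minimal sketch (Python) of this inference step, using the quarterly shares quoted above; the yearly cohort size in the example is a made-up placeholder:

```python
# Quarterly-cohort population inference, using the 1930-2001 Swedish
# quarterly birth shares quoted in the text (they sum to 1.000).
QUARTER_SHARE = {"Q1": 0.256, "Q2": 0.270, "Q3": 0.249, "Q4": 0.225}

def qbc_population(ybc_population: float, quarter: str) -> float:
    """Approximate a quarterly birth cohort's population from its yearly
    cohort's population, accounting for birth seasonality."""
    return ybc_population * QUARTER_SHARE[quarter]

# Hypothetical yearly cohort of 100,000 people:
print(qbc_population(100_000, "Q2"))  # 27,000.0
```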
|
Infection prevention and control measures have been adopted globally to prevent spread within hospitals. Current guidance recommends hand hygiene, personal protective equipment (PPE) and environmental decontamination [2, 3, 4]. Studies have examined the contamination of environmental surfaces with SARS-CoV-2 [5] and modelled the potential risk of COVID-19 transmission in healthcare settings using a surrogate virus [6]. Best practice in the use of single-room accommodation and PPE for preventing the spread of infection is to don PPE on entering the room and doff it on exiting [7], with hand hygiene performed at the relevant times according to the WHO 5 Moments for Hand Hygiene. In addition, environmental cleaning reduces the risk to staff and future patients. Patients were placed in single en-suite rooms while undergoing assessment for COVID-19 and moved into 6-bed cohort rooms on specific wards once a laboratory confirmation of the diagnosis was made. Rooms were cleaned with a chlorine dioxide agent (Tristel FUSE®) once daily, and again after discharge with a terminal clean using chlorine dioxide followed by UVC disinfection (SteriPro®).
Personal protective equipment comprising filtering face piece (FFP) 2 or FFP3 masks, eye protection in the form of goggles or visors, gloves and a long-sleeved fluid-repellent gown was recommended. In each ward, an area was designated for donning PPE, supporting staff confidence. PPE was doffed on leaving the patient room.
PPE was not worn while working in the general ward areas, e.g. at nurses' stations.
As the number of COVID-19 cases increased throughout Ireland and internationally, the demand for PPE increased. In order to preserve supplies, some centres moved to wearing PPE throughout ward areas, within and outside patient rooms, doffing only on exiting the ward in accordance with WHO interim guidance [8]. We theorised that this could increase the risk of spread to healthcare workers, owing to widespread contamination of ward areas and inadvertent breaches in PPE practice, such that healthcare staff might self-inoculate with SARS-CoV-2 once outside the patient room in contaminated PPE.
We undertook this study to demonstrate that the infectious COVID-19 patient readily contaminates the patient area, but that the combination of infection prevention and control measures prevents spread beyond it (Table I). Surface and air samples were analysed for the presence of SARS-CoV-2 RNA by molecular testing using the Cepheid Xpert Xpress SARS-CoV-2 assay (Cepheid AB, Solna, Sweden) under Emergency Use Authorization. The Xpert test platform integrates specimen processing, nucleic acid extraction, reverse transcriptase polymerase chain reaction amplification of SARS-CoV-2 RNA, and amplicon detection in a single cartridge. The assay amplifies 2 nucleic acid targets, namely N2 (nucleocapsid) and E (envelope), of which N2 is the more specific for SARS-CoV-2.
Results and Discussion
Eighty-one surface swab samples were retrieved for the purposes of the study: 26 from within COVID-19 patient rooms, 25 from COVID-19 patient rooms after discharge and following completion of terminal cleaning and disinfection with additional UVC decontamination, and 30 from nurses' stations.
Testing of the patient rooms indicated that the patient easily contaminated the area, with almost half of the tests detecting SARS-CoV-2 (11/26, 42.3%). These areas may have been contaminated by coughing, as they were all in close proximity to the patient's bed or chair, or by spread from the patient's contaminated hands. The remote controller for the bed was the most frequently positive site (2/2, 100%). These remotes were located in rooms in the ICU, where they are frequently handled by staff members while caring for the patient. The bed side rail was the second most frequently positive site (4/6, 66.7%), another area touched often by both patient and staff. The handle of the en-suite bathroom door was not found to be contaminated in any of the 4 en-suite rooms tested. This may be because it was located more than two metres from the patient, or because the patient was too unwell to mobilise to use the bathroom.
There was just one positive test among the samples taken from cleaned rooms. One call bell was found to be contaminated, but it was also noted to be in disrepair and unlikely to be easily cleaned; in addition, its placement in the room was beyond the reach of the UVC. Replacement and an alternative location were immediately recommended. Subsequently, all wards were instructed to review equipment condition and cleaning protocols. One telephone at a nurses' station tested positive for SARS-CoV-2, whereas all desk and computer keyboard samples returned negative results. This might suggest that the contamination arose from the respiratory droplets of an infected staff member rather than from transfer of patient virus out of the contaminated patient room.
The total number of air samples taken was 16: 8 from patient rooms and 8 from corridors of COVID-19 wards. One control sample of VTM was reserved as a negative control for laboratory testing. None of the air samples yielded positive results. While this might reassure us that the virus was not airborne, the absence of a positive control, such as a positive result even in near-patient sampling, prevents us from drawing firm conclusions, as we could not validate our sampling method.
The hospital environment has long been identified as a source of transmission of other infections within hospitals [9]. Placing patients in accommodation isolated from those without infection, hand hygiene, wearing of appropriate personal protective equipment, and thorough environmental cleaning and disinfection have all been recognised as important interventions to prevent and control the spread of infections in hospital. It follows that the same measures should be put in place to prevent the spread of COVID-19. This study demonstrates that these measures effectively prevented the spread of SARS-CoV-2 from contaminated patient rooms to general ward areas. This will inform future management of COVID-19 in the event of resurgence, as well as of other emerging infectious diseases.
|
Non-specific amplifications were found in 56.4% (495 reactions) of samples negative for SARS-CoV-2. In silico analysis of the N2 primer-probe set and gel electrophoresis showed dimer formation. Optimization of the RT-qPCR conditions reduced dimerization events. Conditions must be adjusted to avoid extensive test repetition and waste of resources. According to the CDC protocol (CDC, 2020), amplification of only one of the two virus-specific targets (N1/N2) yields an "inconclusive result", and the recommendation is to repeat the analysis. As observed in routine assays, late N2 amplifications frequently appear in negative samples, rendering them "inconclusive". We therefore decided to study the late non-specific amplifications in negative samples obtained with the CDC 2019-nCoV RT-qPCR protocol and to propose alternatives to reduce these events.
RNA was extracted from nasopharyngeal and oropharyngeal samples (see Material 1). Partial sequence homologies of more than 3 base pairs were observed between the N2 probe and itself (self-dimer, Figure 1E), between the N2 forward primer and the probe, and between the N2 reverse primer and the probe (hetero-dimers, Figure 1F and G, respectively), with Delta G values of -13.09, -8.98, and -9.89 kcal/mol, respectively. Moreover, the N2 forward primer-probe homology lies at the 3' end of the sequence (Fig. 1F). Hairpin loops in the N2 primers and probe showed Delta G values close to zero. Interestingly, we observed fragments smaller than 50 bp in negative samples (Figure 1C, lanes 3, 4, and 5). As the expected fragment for the 2019-nCoV_N positive control and positive samples is 72 bp (Figure 1C, lanes 2, 6, and 7, respectively), PCR products <50 bp confirm the dimerization of primers and/or probes.
In order to minimize dimer formation in N2 reactions, we tested different RT-qPCR conditions, including primer concentrations (133, 213, 266, 320, and 400 nM), probe concentrations (33, 54, 67.5, 81, and 100 nM), and MgSO4 concentrations (3, 4, 5, 6, and 6.5 mM). Such artifacts do not appear to be exclusive to nucleocapsid targets: non-specific signals at late cycles in the envelope protein gene (E target) assay using the Charité protocol (Konrad et al., 2020) and mismatches in primer sequences (Pillonel et al., 2020) were recently reported. The scientific community is discussing the technical limitations of the SARS-CoV-2 RT-qPCR protocols (Marx, 2020), and their optimization is still underway.
It is known that the primer-probe set is the pivotal component of a qPCR assay (Bustin and Huggett, 2017), and if dimerization occurs in a staggered manner, some extension can take place and the product becomes more abundant as cycling progresses. The in silico analysis showed that the probe-probe self-dimer, for instance, had the potential to bind at the 5' and 3' ends, requiring a high amount of energy to fully break the secondary DNA structure (Delta G = -13.09 kcal/mol). Despite this, the optimization presented here drastically reduced dimerization events.
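As a toy illustration of the kind of screening this in silico analysis performs, the Python sketch below flags antiparallel Watson-Crick complementarity at the 3' termini of two oligos. Dedicated tools also estimate Delta G, which this sketch does not, and the example sequences are hypothetical placeholders, not the actual CDC N2 oligos:

```python
# Toy screen for 3'-end complementarity between two oligos: the classic
# primer-dimer configuration, since matched 3' ends can be extended by
# the polymerase. Real tools also compute Delta G; this sketch does not.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def three_prime_dimer(oligo_a: str, oligo_b: str, window: int = 5) -> bool:
    """True if the last `window` bases of the two oligos can pair
    antiparallel with each other."""
    tail_a = oligo_a[-window:]
    tail_b = oligo_b[-window:]
    # Antiparallel pairing: reverse-complement one tail and compare.
    return tail_a == tail_b.translate(COMPLEMENT)[::-1]

# Hypothetical placeholder sequences, chosen so the 3' tails pair:
print(three_prime_dimer("ACGTACGTGCATT", "TTGGCCAATGC"))  # True
```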
The main concern with primer-dimer formation is that it may cause false-positive results. In our experience, 56.4% of the "not detected" reactions (negative samples) showed late non-specific amplification. The strict adjustment of the RT-qPCR conditions carried out in the present study was decisive for optimizing the reaction, reducing the occurrence of late non-specific amplifications to 11.5%. In addition, amplification of only one of the virus-specific targets (N1/N2), even when it is a non-specific amplification, yields an "inconclusive result" that requires repeat testing, increasing costs and generating delays in results, or even unnecessary requests for new samples.
There was no decrease in reaction efficiency for the positive control or positive samples, even though dimerization of the N2 primer-probe set is suggested. This may be due to preferential annealing of the primers and probe to the cDNA template in positive samples, which occurs in earlier PCR cycles (cycles 10-30, depending on the amount of viral genetic material). Although the detection of SARS-CoV-2 in positive samples seems unaffected by non-specific signals, these signals matter greatly in the assessment of negative samples, where they lead to inconclusive results.
Finally, we recommend that RT-qPCR users adjust primer-probe and magnesium concentrations, RT step duration, and thermal cycling temperatures, regardless of the master mix kit used, to minimize dimer formation and to avoid extensive test repetition and waste of resources.
The authors declare that they have no competing interests.
This work was supported by Universidade Federal de Juiz de Fora (UFJF).
|
The outbreak of the 2019 novel coronavirus disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) began in Wuhan, China, in December 2019 and rapidly spread to the rest of the country. COVID-19 also occurred in other countries and became a global threat. Confirmed cases of COVID-19 worldwide surpassed 690,000, with over 33,000 deaths, by March 31st, 2020.
The genome of SARS-CoV-2 shares about 80% sequence identity with two other coronaviruses that caused public health emergencies in recent decades, severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV) [1]. The epidemiology of COVID-19 is similar to that of SARS and MERS, with droplets and close contact as the main transmission routes [2]. Based on the understanding of SARS and MERS, control measures recommended by the World Health Organization include maintaining distance to avoid close contact. Although the fatality rate of SARS-CoV-2 appears lower than that of SARS and MERS, more people are threatened by COVID-19 because of its much higher transmissibility [3]. Large specific populations, such as women in the breastfeeding period, face the threat of the disease. Previously, a study involving nine pregnant women with confirmed COVID-19 and their infants showed no direct evidence of maternal-infant transmission, but little was reported on how breastfeeding would affect SARS-CoV-2 transmission [4]. Breastfeeding involves a series of intimate behaviors, including skin-to-skin contact and inadvertent coughs or sneezes between mother and infant. Guidelines on breastfeeding in women with COVID-19 are controversial.
The guideline for pregnant women with suspected SARS-CoV-2 infection suggested isolating mothers from their infants, without breastfeeding, until viral shedding has cleared [5], in consideration of possible transmission risks. However, there is not enough evidence on mother-to-child transmission (MTCT) to date. The Royal College of Obstetricians & Gynaecologists (RCOG) suggests breastfeeding, believing that its benefits outweigh the potential risks of MTCT [7]. Therefore, the safety of breastfeeding in women with COVID-19 is worth exploring. In this study, we performed a preliminary evaluation of the safety of breastfeeding in women infected with SARS-CoV-2. The presence of SARS-CoV-2 nucleic acid in throat swabs or blood was determined by quantitative real-time polymerase chain reaction. IgM and IgG against SARS-CoV-2 in breast milk or patients' serum were detected by ELISA. Suspected cases were defined as those with chest CT changes typical of viral pneumonia but with negative etiologic or serological evidence.
Continuous variables were presented as mean ± standard deviation and compared using Student's t-test. Categorical variables were expressed as number (%) and compared using the chi-square or Fisher's exact test, as appropriate. A p value less than 0.05 was considered statistically significant. Statistical analysis was done in SPSS version 21.0.
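For illustration, the same comparisons can be reproduced with open-source tools; a minimal sketch using Python's scipy.stats in place of SPSS, with made-up placeholder data:

```python
# Hedged illustration of the comparisons described above; all data below
# are made-up placeholders, not values from the study.
import numpy as np
from scipy import stats

# Continuous variable (e.g. age): Student's t-test between groups.
age_confirmed = np.array([29, 31, 27, 33, 28, 35])
age_suspected = np.array([30, 26, 32, 29, 31])
t_stat, p_t = stats.ttest_ind(age_confirmed, age_suspected)

# Categorical variable (e.g. fever yes/no per group): Fisher's exact test,
# appropriate when expected counts are small; chi-square otherwise.
table = [[10, 4],   # confirmed group: fever / no fever
         [1, 8]]    # suspected group: fever / no fever
_, p_fisher = stats.fisher_exact(table)

print(p_t, p_fisher)  # p < 0.05 treated as significant, as in the text
```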
COVID-19 was diagnosed in the third trimester in 20 of the 23 patients (87%) and in the puerperal period in 3 of 23 (13%). The patients' ages ranged from 21 to 40 years, with an average of 29.2 ± 4.9 years. Gestational age at delivery ranged from 34 weeks plus 2 days to 41 weeks plus 3 days. Clinical features of these patients on admission are listed in Table 1.
None of the patients reported direct contact with the Huanan Wholesale Seafood Market.
In total, six patients, all from the confirmed group, reported a clear contact history with confirmed COVID-19 patients. Fever was the most common complaint on admission, with a significantly higher rate in the confirmed group than in the suspected group (71.4% vs 11.1%, p<0.05). Other symptoms of upper respiratory tract infection were also reported, without statistical differences: eleven patients had cough (nine in the confirmed group and two in the suspected group), three had myalgia (all in the confirmed group) and two had dyspnea (both in the confirmed group). One patient in the confirmed group presented with diarrhea, an atypical symptom of COVID-19. Notably, eight patients were asymptomatic, and the rate of asymptomatic patients was significantly lower in the confirmed group than in the suspected group (14.3% vs 55.6%, p<0.05).
Most pregnancies were delivered by cesarean section. None of the patients required mechanical ventilation. Six (42.9%) confirmed patients and two (22.2%) suspected patients received antepartum antiviral therapy. Ten (71.4%) confirmed patients and four (44.4%) suspected patients received antiviral treatment after delivery. Six (42.9%) confirmed patients and one (11.1%) suspected patient received antepartum glucocorticoid therapy. Ten (71.4%) confirmed and three (33.3%) suspected patients received glucocorticoids after delivery. There were no statistical differences in these treatments between the two groups.
All the pregnancies were singleton. The average birth weight was 3173.9 ± 470.7 g, with no clinical manifestations of neonatal asphyxia. SARS-CoV-2 testing of throat swabs was performed in fifteen neonates at birth and in six neonates in the neonatal intensive care unit (NICU) after birth.
All neonatal SARS-CoV-2 test results were negative. Clinical features of the neonates are displayed in Table 2. Feeding patterns and health conditions of the infants were followed up to March 27, 2020 (Table 2). All infants in the confirmed group were discharged from the NICU and had no pneumonia-related symptoms. Six infants were fed with whole or partial breast milk. Eight infants (five in the confirmed group and three in the suspected group) received antibody testing one month after birth. Testing was not performed in the other fifteen infants because their parents declined or the infants were under one month of age. The results of IgM and IgG detection in infants were all negative (Table 2).
Most breast milk samples were collected within one week postpartum. Samples from two patients were collected on the 15th and 12th days after delivery, respectively, since these puerperae were diagnosed with COVID-19 after delivery. In the confirmed group, six samples were collected at the time when SARS-CoV-2 detection in throat swabs became negative. SARS-CoV-2 nucleic acid detection was negative in all breast milk samples (Table 3). Testing of IgG and IgM against SARS-CoV-2 in breast milk and maternal blood was performed in seven patients (four confirmed and three suspected) (Table 4). IgM antibody was present in all four confirmed patients (100%) and in one suspected patient (33.3%), in accordance with IgM detection in maternal blood. IgG antibody in breast milk was negative in all cases, whether it was positive or negative in maternal blood (Table 4).
In this study, we reported a cohort of twenty-three puerperae with confirmed or suspected COVID-19. The clinical features of these patients showed patterns similar to those of other reported patients [3]. Fever and cough were the most common symptoms. Gastrointestinal symptoms were rare but present in our cohort. It has been suggested that pregnant women and puerperae within one month after delivery are at greatest risk for respiratory infectious diseases such as influenza [9].
Cohort studies of SARS and MERS revealed high mortality and an increased need for mechanical ventilation and ICU admission even when termination of pregnancy was performed [10, 11]. Severe cases with multiple organ dysfunction syndrome (MODS) and a high incidence of severe neonatal asphyxia have been reported in pregnancies with COVID-19 [12]. Pregnancy outcomes were mild in our study, without maternal or fetal/neonatal death, and the medical condition of the puerperae was stable, consistent with recently reported studies [4, 13].
Human-to-human transmission of SARS-CoV-2 occurs mainly via respiratory droplets [2]. A primary epidemiological feature of COVID-19 is familial clustering [14]. Unlike the early reported cases, the proportion of patients with a contact history involving the Huanan Wholesale Seafood Market or close interaction with confirmed patients was much smaller [4]. In light of this, special attention should be paid to the asymptomatic cases in our cohort. These patients were screened out by chest CT scan before their admission at other hospitals, and two of them were confirmed as COVID-19 by throat swab testing. Since SARS-CoV-2 can be transmitted by asymptomatic carriers [15], routine COVID-19 screening should be performed before planned admission of pregnant women during an epidemic, for the safety of other inpatients and healthcare workers. Compared with nucleic acid testing, chest CT scanning is more convenient and feasible in most medical institutions, with relatively high sensitivity in identifying COVID-19 as viral pneumonia [16]. Therefore, chest CT should be considered as a primary screening tool for COVID-19 before admission of pregnant women. Although the radiation exposure of a single examination is far below the dose estimated to harm the fetus, chest CT in indicated pregnant patients should follow the As Low As Reasonably Achievable (ALARA) principle [17].
With extensive progress in the study of SARS-CoV-2, the virus has been detected in patients' body fluids such as urine and blood, besides specimens from the respiratory tract [18]. Thus, other possible transmission routes are worth exploring. The presence of SARS-CoV-2 nucleic acid in fecal samples points to the possibility of fecal-oral transmission in COVID-19 [19]. Negative results of SARS-CoV-2 detection in amniotic fluid and umbilical cord blood indicate a low probability of intrauterine transmission from mother to fetus [4]. However, recent studies reported that IgM and IgG antibodies against SARS-CoV-2 were detected in some neonates born to infected mothers. Breastfeeding contributes to the health and well-being of both mothers and infants. Early initiation of breastfeeding protects newborns from acquiring infection and reduces infant mortality [21], especially in premature infants [22]. WHO suggests that breastfeeding be started within one hour after birth and continued until two years of age. However, the recommendation becomes complicated when mothers suffer from infectious diseases. On one hand, immunomodulatory proteins in human milk provide anti-infective benefits for infants, particularly protection against viruses causing respiratory and gastrointestinal tract diseases [23]. On the other hand, breast milk may be contaminated with viruses such as human cytomegalovirus (HCMV), hepatitis B virus (HBV) and human immunodeficiency virus (HIV), indicating a possible risk of MTCT via breastfeeding [24]. Numerous studies have investigated breastfeeding in mothers with viral infections, and no additional risk of MTCT has been reported in mothers with HBV or HIV [25, 26]. Continuation of breastfeeding is also recommended in mothers infected with H1N1 influenza [27]. Therefore, breastfeeding should not simply be forbidden in mothers infected with SARS-CoV-2 unless the potential risks outweigh its advantages. Even with a small number of cases, our results strongly indicate that breast milk may not be a transmitting vector for SARS-CoV-2; thus breastfeeding is theoretically possible for mothers with COVID-19. Furthermore, because IgG can cross the placenta, the presence of IgG against SARS-CoV-2 in maternal blood before delivery implies possible antepartum protection of the fetus. The presence of IgM against SARS-CoV-2 in maternal blood was consistent with the presence of IgG. IgG and IgM in breast milk are produced by different mechanisms, and antibody levels of IgG and IgM in breast milk are lower than in maternal blood [28]. IgM against infectious agents in breast milk can protect infants against the same pathogens [29] and inhibit the entry and transport of viruses such as HIV to infants [30]. We infer that postpartum protection of infants by antibodies transferred via breastfeeding may also exist for mothers infected with SARS-CoV-2.
All the infants were in healthy physical condition by the end of follow-up (March 27, 2020).
The rate of breast/mixed feeding was much lower in the confirmed group than in the suspected group at follow-up. Only one confirmed patient was breastfeeding her infant. The lower rate of breastfeeding in mothers with COVID-19 reflects the cautious guidance issued by COGA while the safety of breastfeeding remained uncertain. Besides this case, another patient with confirmed COVID-19 discharged from our hospital started breastfeeding in mid-February, and her infant developed no clinical manifestations of pneumonia (data not shown). This patient was not included in the cohort because we could not collect her breast milk during hospitalization. However, given the lack of evidence from larger samples of breastfeeding practice, mothers with COVID-19 who intend to breastfeed should be informed of the current uncertainty around breastfeeding with COVID-19 and of the possibility of transmission through close contact. Several measures can be taken to limit viral spread, as RCOG recommends, such as washing hands before feeding the infant, avoiding coughing or sneezing during feeding, and wearing a face mask. Bottle feeding of expressed milk by uninfected relatives may also be an alternative. Furthermore, antiviral treatment should be reconsidered if mothers wish to start or continue breastfeeding after being confirmed or suspected with COVID-19.
In summary, our study proposes for the first time the feasibility of breastfeeding in women infected with SARS-CoV-2. Taking the potential benefits and risks into account, breastfeeding is encouraged if there is no other medical contraindication. The study was preliminary, with a small sample size and a short observation interval. The safety of breastfeeding should be confirmed by further studies.
We declare no competing interests.
Table 3. SARS-CoV-2 detection in breast milk and throat swabs.
|
• to provide the epidemic mathematical modelling teams with the resources they need to organize the exit from the crisis;
• to make automated machines available to healthcare personnel to reduce or eliminate their risk of contamination when performing certain tasks;
• to begin a reflection on the use of digital technology in responding to future epidemics and in organizing the healthcare system between epidemics;
• to ensure that ethical principles and personal data protection are respected in the use of digital tools.
The authors declare that they have no competing interests.
|
Coronaviruses are a large family of zoonotic RNA viruses that mainly circulate among animals, including mice, pigs, bats and avian hosts [1], but it has been known since the 1960s that they can infect humans too. They belong to the family Coronaviridae and have large genetic diversity; during viral replication they generate sub-genomic RNAs, leading to an increase in the number of coronavirus species [2]. Human-to-human transmission has also been reported [3].
Novel viruses are often associated with human outbreaks. Six coronaviruses were already known to infect humans: OC43, SARS-CoV (severe acute respiratory syndrome coronavirus), HKU1, 229E, NL63, and MERS-CoV (Middle East respiratory syndrome coronavirus) [4, 5]. The seventh strain emerged in China, where the epidemic started in Wuhan on December 12th, 2019, from a local fresh seafood market [1]; it was designated severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and causes coronavirus disease 2019, COVID-19 [3, 6].
Cadavers can always pose biological hazards to forensic scientists, including hepatitis C, HIV infection, Middle East respiratory syndrome (MERS), hemorrhagic fever viruses such as Ebola, meningitis and now also SARS-CoV-2. As of April 14, 2020, the pandemic of the new coronavirus disease had reached 185 countries, with 1,920,918 confirmed cases [7]. To our best knowledge, no case has been reported of a medical examiner infected after an autopsy of a COVID-19 case [8], and no identification autopsy has yet been reported on suspected or confirmed SARS-CoV-2-positive human remains.
Nevertheless, considering the COVID-19 outbreak, the World Health Organization's declaration of a pandemic on March 11, 2020, and the increasing number of deaths, we have to consider this potential infectious risk for forensic pathologists and odontologists.
This short report provides specific recommendations to forensic odontologists in terms of biosafety and infection control practices during the post mortem dental data collection of unidentified human remains without any known medical history data.
A dental autopsy should be performed when identity is unknown, limiting the number of personnel in the autopsy room to three people: three odontologists, or two odontologists and one dental hygienist with a forensic background. Alternatively, a forensic pathologist may assist with the dental autopsy. Immunosuppressed or high-risk autopsy personnel should not participate. The authors advise odontologists to always discuss the case with the forensic pathologist in charge before starting the dental autopsy.
The infectious nature of the case should be determined before any post mortem dental data collection, starting from the history available from the police on the circumstances of the recovery of the body and, as per the Centers for Disease Control and Prevention (2020), from the medical examiner's collection of specimens for SARS-CoV-2 testing: a nasopharyngeal swab and an oropharyngeal swab [9].
It is well known that infection can spread either by aerosol or directly through cuts and puncture wounds. When such cases are unsuspected or undiagnosed before death, they can be hazardous to the forensic pathologist, odontologist, technicians, and other personnel present in the mortuary. The most important aspect of protection for dental autopsy personnel is the correct use of personal protective equipment and training prior to conducting any autopsy or dental autopsy [9-12]. Full PPE and the necessary equipment should be at hand so as to avoid leaving the mortuary area. The recommended universal precautions and PPE are:
- Wear a surgical uniform;
- Over the scrubs, wear a long-sleeved waterproof or fluid-resistant gown to protect the chest, arms and legs;
- A disposable apron covering the chest and legs, over the waterproof gown;
- Double non-sterile gloves (preferably nitrile); gloves must extend to cover the wrists; the outer nitrile gloves can be changed frequently if needed;
- Heavy-duty gloves over the first nitrile gloves (if post mortem dental data collection involves cheek cuts);
- Consider using a whole-body suit;
- Goggles and a plastic face shield or face mask to protect the face, eyes, nose, and mouth;
- Class 3 or Class 2 filtering face masks (certified disposable N95 respirator or higher, FFP2, FFP3); surgical masks do not provide adequate protection but can be worn over an FFP2 mask; FFP3 masks are preferred;
- Rubber boots and waterproof shoe protectors;
- A surgical cap.
The precautions listed above may exceed the capabilities of under-staffed or overwhelmed forensic facilities. In this event, forensic odontologists must protect at least the eyes, mouth and hands with two physical barriers (two pairs of gloves; goggles or glasses plus a face shield; multiple filtering masks). Where PPE is unavailable, the dental autopsy must not be performed and should be postponed.
To prevent exposure of the eye mucosa to any accidental splashing, goggles or the face shield should fit the contours of the user's face. Ear and nose orifices and wound openings, such as a tracheostomy opening, should be packed with cotton or gauze dipped in disinfectant [13, 14]. Splashing of water or fluids must be strictly avoided while performing the dental autopsy. When necessary, wipe the shield with wet gauze to enhance visibility. Dental autopsies involve no aerosol-generating procedures, but they require instruments and photographic and radiographic equipment. Caution should be exercised when using any sharp instruments, and only one odontologist should be allowed to make cuts on the human remains.
Dental radiography with portable equipment must be performed, but in a way that limits the potential for staff exposure to COVID-19 [15]. To reduce the duration of the dental autopsy, periapical X-rays should be limited to sound teeth for age estimation, treated teeth, teeth with decay, edentulous areas and any region with unique findings. All photography and radiography equipment should be covered with waterproof material such as plastic sheets to minimize contamination. Disinfection of such equipment is paramount.
Gloves should be changed before using any photography or X-ray equipment. To make the process convenient, it is highly recommended that personnel in clean (uncontaminated) PPE assist with photography and radiography. This allows the odontologist to complete the dental examination without interruption, and it also minimizes the risk of skin contamination from removing and donning gloves multiple times. When the case is SARS-CoV-2 confirmed, it is highly recommended to avoid any dental specimen collection unless otherwise requested by the medical examiner for a DNA sample.
After the dental autopsy, keep the ventilation active and remove all PPE before leaving the autopsy suite, then follow the appropriate waste disposal requirements. After removing PPE, hands and contaminated skin surfaces should be thoroughly washed with soap and water for 20 s, avoiding any splashing, whenever changing gloves and before leaving the autopsy room. If water is not available, an alcohol-based hand sanitizer containing 60-95% alcohol must be used; avoid touching the face with unwashed hands.
As per guidelines issued by the Indian Ministry of Health & Family Welfare (2020) [14], reusable clothing can be removed from the autopsy suite and laundered according to routine procedures. Besides washing and cleaning the other dental autopsy instruments, all surfaces and transport trolleys should be cleaned with soap and water and then disinfected for at least 20 min with a 0.5-1% sodium hypochlorite solution, followed by autoclaving of the instruments. Other common effective hospital disinfectants are ethanol (62-71%) and hydrogen peroxide (0.5%). Cameras, telephones, laptops and portable X-ray devices, once the protective film is removed, should still be treated as contaminated and handled with gloves. All these items must be wiped with an appropriate disinfectant.
It is known that SARS-CoV-2 persists on surfaces for days [16] and in the nasal cavity for 3 days after death [17]; for this reason it is possible that the virus persists on the bodies of the deceased, too. As a consequence, unidentified human remains must be handled safely during transportation, storage, autopsy and burial/cremation [18]. It must be stressed that an autopsy of an unidentified person whose death is due to COVID-19 should be performed only for forensic reasons [19] or identification purposes. The identification of COVID-19 cases should always follow proper management and humanitarian principles, adopting the entire set of universal precautions and recommendations described. Respecting safety precautions can minimize risks, and it is unethical to refuse to perform a dental autopsy requested by the medical examiner, except when PPE is unavailable or the odontologist is personally at high risk due to health issues. The identification process relies not only on post mortem dental data but also on DNA, which is collected by the forensic pathologist. These two primary identifiers, DNA and dental data, should both be considered when performing an identification autopsy, but when a dental autopsy is too risky and/or too labor-intensive, DNA can be considered a stronger substitute for identification [20].
Given the current spread of COVID-19, all autoptic procedures, including dental autopsies, must assume that human remains are potentially infected [21]. Odontologists should be aware that PPE inevitably reduces fine motor skills, and if there is a lack of confidence or inadequate training, dental post mortem collection should be performed by a more experienced colleague. The forensic dental identification of suspected or confirmed COVID-19 cadavers must balance the protection of the personnel involved with the need to ensure dignity for the human remains, but best practice in human identification requires the collection of dental and dental radiology data [22, 23]. Forensic odontologists and dental hygienists involved in autoptic procedures on unidentified human remains infected with COVID-19 must be well trained in infection prevention and control practices and in managing the dead in challenging circumstances [12, 14].
The preparation of the body for the funeral must finally be discussed with the medical examiner in charge, also considering cultural and religious practices, in line with the directives issued by the governing body in the relevant country. It is recommended that processed human remains be disposed of without embalming, preferably as soon as practicable, going directly from the mortuary to burial or cremation. For the best management of human remains, single burial should be preferred to cremation [12].
Given the current spread of COVID-19, all autoptic procedures, including dental autopsies, must assume that human remains are potentially infected. Risk should not prevent us from applying best practice in human identification through the collection of the primary identifiers: fingerprints, DNA and dental data. To balance safety with respect for the human rights of the dead, strict infection and safety protocols must be applied through the planning, training, preparation and experience of all personnel entering the autopsy suite. Forensic odontologists and dental hygienists involved in autoptic procedures on infectious human remains should always be well trained in infection prevention and control practices and in the management of the dead in challenging circumstances.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
Coronavirus disease 2019 (COVID-19) is an infection of the respiratory tract caused by a newly emerged coronavirus, SARS-CoV-2, which was first identified in December 2019 in Wuhan, China. Based on genetic sequencing, SARS-CoV-2 is a betacoronavirus closely related to the SARS virus.
The rapid spread of this disease has quickly made it a global health concern. As of August 1st, 2020, more than 17.5 million people had been infected worldwide [1]. Commonly reported presenting symptoms of COVID-19 are fever, cough and shortness of breath; less frequently reported symptoms include muscular aches, anorexia, malaise, sore throat, nasal congestion, dyspnea, and headache [2]. The lungs appear to be the primary organ involved in COVID-19, with a range of disease severity from asymptomatic subclinical infection to severe pneumonia progressing to acute respiratory distress syndrome [3]. Fortunately, the majority of patients with COVID-19 develop mild or uncomplicated illness; however, around 14% develop serious illness that requires hospitalization and oxygen supplementation, and 5% require admission to critical care facilities [4]. Complications of severe COVID-19 include acute respiratory distress syndrome (ARDS), septic shock and multi-organ failure, which may be characterized by acute kidney and cardiac injury [4]. Currently, diagnosis is made with a SARS-CoV-2 real-time reverse-transcriptase polymerase chain reaction diagnostic panel using upper and lower respiratory specimens [5].
Cardiac injury is a common condition among hospitalized COVID-19 patients [6] and is associated with an increased risk of in-hospital mortality [7] . A recently published systematic review [8] of more than 10 studies from Italy, the USA, and China highlighted that myocardial injury is not uncommon in the setting of COVID-19 and can lead to higher mortality in hospitalized patients. Cappannoli et al. [9] described in a review how patients with COVID-19 may share many characteristics with patients who have CVDs, which often makes it difficult to differentiate some clinical manifestations and may explain why this infection is more severe in patients with underlying cardiovascular risk factors. The exact pathophysiological mechanisms that lead to myocardial injury caused by COVID-19 are not well understood [6] . The postulated mechanisms include direct damage to the cardiomyocytes, systemic inflammation, myocardial interstitial fibrosis, an interferon-mediated immune response and an exaggerated cytokine response by type 1 and 2 helper T cells, in addition to coronary plaque destabilization and hypoxia [6, 7] . Cardiac injury may manifest as severe myocarditis with reduced systolic function and elevated troponin (hs-TNI) [10, 11] . Patients with pre-existing cardiovascular diseases are more prone to SARS-CoV-2 infection and are more likely to develop a stormy course [12] . Moreover, cardiovascular-related comorbidities have been attributed to severe COVID-19; these are arrhythmia, hypertension, cardiomyopathy and coronary heart disease [6, 12] . Furthermore, there is growing evidence in recently published studies that patients developing myocardial injury from SARS-CoV-2 are at a higher risk of in-hospital mortality [13] . Shi et al. [14] reported that among 416 patients with confirmed COVID-19, 19.7% developed cardiac injury during hospitalization; notably, the mortality rate was higher in patients with cardiac injury than in those without.
Cardiovascular diseases (CVDs) account for a huge burden on healthcare systems worldwide [15] . In Oman, CVDs are the leading cause of death and are linked to significant morbidity [16] . In a 2013 report from the Ministry of Health, 30% of reported deaths were related to CVDs [16] . In this study, we aim to describe the characteristics of patients with COVID-19 and cardiac injury, and to examine the potentially higher mortality in this high-risk population.
This retrospective study was conducted with approval from the research ethics and administrative committees at the participating centers. Data were obtained from subjects' records preauthorized to be accessed for research purposes. Approvals were obtained from the following bodies: the Scientific Research Committee at the Royal Hospital, Muscat, Oman (Approval No. SRC#46/2020, dated May 10th, 2020) and the Forces Medical Services Medical Ethics Committee, Armed Forces Hospital, Muscat, Oman (Approval No. FMC-MEC 001/2020, dated May 4th, 2020, citing the committee meeting on April 30th, 2020). We included patients above the age of 14 years hospitalized with laboratory-confirmed COVID-19 between March 11th, 2020, and June 27th, 2020, at either of two tertiary care centers in Muscat, the capital and major metropolitan area of Oman with the highest prevalence of COVID-19 in the country. One of the participating centers was assigned to receive patients with severe COVID-19 and the other received unclassified cases. World Health Organization interim guidance was used to diagnose all enrolled patients in our cohort. A total of 143 patients were identified and all were included. Electronic medical records were reviewed for demographic characteristics, clinical data (symptoms, comorbidities, laboratory findings, treatments, complications, and outcomes), and laboratory tests. Cardiac injury was defined as a blood level of the cardiac biomarker hs-TNI above the 99th-percentile upper reference limit, regardless of new abnormalities on electrocardiography or echocardiography. Data were analyzed using JMP Pro 14 and SAS software (SAS Institute Inc., Cary, NC). A P value < 0.05 was considered statistically significant.
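As a minimal illustration of the case definition above (not the authors' analysis code), the following Python sketch flags cardiac injury when hs-TNI exceeds an assumed 99th-percentile upper reference limit; the threshold and patient values are placeholders, since the assay-specific limit is not given in the text.

```python
# Hedged sketch of the cardiac-injury definition: hs-TNI above the
# 99th-percentile upper reference limit (URL). Threshold and values are
# placeholders, not the study's assay parameters or patient data.
import pandas as pd

URL_99TH = 0.04  # ng/mL; assumed assay-specific upper reference limit

patients = pd.DataFrame({"id": [1, 2, 3, 4],
                         "hs_tni": [0.01, 0.12, 0.03, 0.55]})
patients["cardiac_injury"] = patients["hs_tni"] > URL_99TH
print(patients)
print("injury rate:", patients["cardiac_injury"].mean())
```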
A total of 143 patients hospitalized with confirmed COVID-19 at the participating centers during the study period between March 11, 2020, and June 27, 2020 were included. Of them, 49 (34.3%) had underlying CVD, including hypertension, coronary heart disease, rhythm disturbance or cardiomyopathy; 31 patients (21.7%) had cardiac injury as indicated by elevated hs-TNI levels, and 112 patients (78.3%) had no recorded elevation in hs-TNI. The mean age of the study population was 49.36 ± 15.32 years, and 124 (86.7%) were male. Fever was the most commonly reported symptom (129 patients [84%]). Cough, shortness of breath, diarrhea, chest pain, and sore throat were presenting symptoms in 99 (69%), 76 (53%), 24 (17%), 21 (15%), and 15 (10%) patients, respectively. Less commonly reported symptoms included nausea and/or vomiting. Obesity, defined as a body mass index (BMI) > 30, was noted in 23 patients (16%). Of these 143 patients, 6 (4.2%) and 3 (2.1%) had coronary heart disease and cerebrovascular disease, respectively. The proportions with chronic heart failure, chronic renal failure, chronic obstructive pulmonary disease, smoking and cancer were 4.2% (5 patients), 8.4% (12 patients), 7% (10 patients), 4.9% (7 patients), and 2% (3 patients), respectively. Table 1 summarizes these findings.
Compared with patients without myocardial injury, patients with cardiac injury were older (median [range] age, 61 [33-89] years vs 44 years; P < 0.0001). Fever, followed by cough, was the leading symptom in both cohorts. Interestingly, chest pain as a presenting symptom was more frequent in the non-cardiac injury group than in the cardiac injury group, although the difference was not statistically significant (15.3% vs 12.9%, P = 0.749). Moreover, patients who developed cardiac injury had significantly more comorbidities than those without cardiac injury. Anti-interleukin drugs such as tocilizumab and anakinra were used in 48 (33.5%) and 3 (2%) patients, respectively. Plasma exchange was used in 27 patients (18.8%). Table 3 summarizes treatments and interventions.
Complications in the two groups are illustrated in Figure 1.
We looked into cardiac events during the course of hospitalization of the study population, comparing these events in those with cardiac injury and those without, including arrhythmia and ST-elevation myocardial infarction. Tachyarrhythmias were the most common rhythm disturbance reported. Atrial tachyarrhythmia was seen in 4 (12.9%) patients with cardiac injury but in only 1 (0.9%) patient without documented cardiac injury. Ventricular arrhythmia developed in 1 (3.2%) patient in the cardiac injury group vs 2 (1.8%) patients in the non-cardiac injury group. Bradyarrhythmia occurred in 3 patients (9.7%) in the cardiac injury group compared with 6 (5.4%) in the non-cardiac injury group. Of the patients who had cardiac injury, 2 (6.5%) presented with ST-elevation myocardial infarction.
The mortality rate was remarkably higher in patients with cardiac injury compared with those without cardiac injury (16 [53.3%] vs 8 [7.1%]; P < 0.00001).
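This comparison can be reproduced in spirit from the reported counts (taking the stated 53.3% to imply a denominator of 30 in the cardiac injury group). The sketch below uses Fisher's exact test rather than whatever test the authors' JMP/SAS workflow applied, so the exact P value may differ.

```python
# Hedged sketch (not the authors' code): Fisher's exact test on the
# reported death counts: 16/30 with cardiac injury (53.3%) vs 8/112
# without (7.1%); denominators inferred from the stated percentages.
from scipy.stats import fisher_exact

#            died  survived
table = [[16, 30 - 16],    # cardiac injury
         [8, 112 - 8]]     # no cardiac injury
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, P = {p_value:.1e}")
```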
Further analysis of mortality in this retrospective cohort revealed that, of the 143 patients included in this study, 3.62% (3 of 83) with normal hs-TNI levels and no underlying CVD, 17.24% (5 of 29) with normal hs-TNI levels and underlying CVD, 63.6% (7 of 11) with elevated hs-TNI levels and no underlying CVD, and 45% (9 of 20) with elevated hs-TNI levels and underlying CVD died during hospitalization. Figure 2 illustrates these findings. Of note, patients who died in the elevated hs-TNI without underlying CVD group were younger than patients with elevated hs-TNI levels and underlying CVD (median age 55.4 [range 33-76] vs 70.0 [53-80] years). Males predominated in both groups, with 1 female in each. Interestingly, deceased patients in the elevated hs-TNI without underlying CVD group had higher D-dimer and LDH levels than deceased patients in the elevated hs-TNI with underlying CVD group.
This study describes the association of myocardial injury and underlying CVD with mortality and unfavorable outcomes in patients with COVID-19.
Our study showed an apparent association between cardiac injury and mortality in patients with COVID-19. Among 143 patients with COVID-19, the 31 (21.7%) who had cardiac injury had significantly higher mortality. In the cardiac injury group, the median duration from illness onset to death was 17.8 days, a figure that is comparable to published data in the literature. Shi et al. [14] described, in a retrospective cohort of patients with COVID-19, that 19.7% had cardiac injury, with a mortality rate of 51.2% compared with 4.5% in the non-cardiac injury group. Notably, there was a clear trend towards increased mortality in patients with underlying CVD, as evident from the almost 5-fold increase in mortality between the normal-troponin groups without and with CVD (3.6% vs 17.2%). It is also noteworthy that 82.8% of patients with underlying CVD but a normal hs-TNI level had a relatively better outcome compared with patients with an elevated hs-TNI level but without underlying CVD (mortality: 17.2% vs 63.6%). These results suggest that myocardial biomarkers should be utilized for risk stratification in CVD patients who develop COVID-19 infection. In a retrospective study, and similar to our findings, Guo et al. [17] reported that the highest mortality and the shortest survival were seen in patients with underlying CVD (defined as patients with cardiomyopathy, coronary artery disease, or hypertension) and an elevated hs-TNI level. The exact reasons why patients with underlying CVD are more vulnerable to more severe COVID-19 disease remain unclear. However, a few theories have been postulated, including direct viral invasion, systemic inflammatory response, destabilized coronary plaque and hypoxemia [6] .
Chronic cardiovascular disease, particularly coronary artery disease and heart failure, can worsen in the setting of viral infection as a result of the mismatch between the higher metabolic demand caused by the infection and decreased cardiac reserve; these mechanisms may subsequently lead to ischemia [18] . In addition, in a severe systemic inflammatory response, inflammatory activity within the coronary atherosclerotic plaques is intensified, rendering them vulnerable to rupture [19] . A recently published epidemiological study [20] demonstrated that plasma levels of cytokines including interleukin (IL)-2, IL-7, IL-10, granulocyte colony-stimulating factor, interferon gamma-induced protein 10, macrophage inflammatory protein 1-alpha and tumor necrosis factor α were all elevated above the reference range in patients with COVID-19 who were admitted to the intensive care unit. This exaggerated cytokine response by type 1 and type 2 helper T cells, resulting in a cytokine storm, can contribute to cardiac injury [20] .
Our study also showed that there may be other prognostic factors that can predict a poor outcome, including D-dimer and LDH levels on admission. D-dimer, a product of the lysis of cross-linked fibrin, is an important marker of thrombosis or coagulation activation [21] . The average D-dimer level on admission in those who died in the elevated hs-TNI without underlying CVD group (mortality 63.6%) was around four times (13.3 vs 3.3 μg/mL) the average in the elevated hs-TNI with underlying CVD group (mortality 45%). A retrospective cohort study found that, among 343 patients with COVID-19, mortality in patients with D-dimer levels ≥2.0 μg/mL was higher than in patients with D-dimer levels < 2.0 μg/mL (P < 0.001) [22] . LDH could be another important prognostic marker. The mean LDH in our study in patients with an elevated hs-TNI level without underlying CVD was numerically higher, at 1366, compared with 659 in patients with an elevated hs-TNI level with underlying CVD. LDH is essential for glucose metabolism, particularly the conversion of pyruvate to lactate. LDH exists in all human cells, particularly in cardiac and liver cells, and its release is triggered by cell membrane necrosis [23, 24] . A multi-center study that included more than a thousand patients with COVID-19 linked COVID-19 severity to elevated LDH levels [25] . LDH levels also correlated with CT findings of severe pneumonia [26] .
Patients with cardiac injury were more prone to complications (ARDS: 87% vs 42.9%, P < 0.00001; acute kidney injury: 67.7% vs 11.6%, P < 0.00001; drop in hemoglobin or anemia: 38.7% vs 3.6%, P < 0.00001). Renal replacement therapy was required by 48.4% of cardiac injury patients compared with 3.6% in the non-cardiac injury group (P < 0.00001). Our figures are significantly higher than those reported from similar studies. For instance, Shi et al. [14] found in their study that 58.5% of cardiac injury patients developed ARDS, 8.5% had AKI, and anemia developed in 4.9% of the cardiac injury patients. While our finding of a tendency toward an unfavorable clinical course and outcome is consistent with the literature, the markedly worse outcomes can be explained by a limitation of our study: one of the designated study sites was required to receive mostly patients with severe COVID-19.
Since patients with underlying CVD are more likely to experience adverse events, including a higher risk of death, in the setting of myocardial injury and COVID-19, it is essential to triage patients with COVID-19 according to the presence of underlying CVD and evidence of myocardial injury, for prioritized treatment and consideration of special treatment strategies.
The relatively small number of enrolled patients may limit the statistical strength of our findings. However, in view of the nature of an emerging pandemic like COVID-19 and the fact that patients with CVD are among the highest-risk and highest-utilization groups in healthcare systems globally, it was still worthwhile to report our findings at this time. A larger cohort study is needed to verify our conclusions. One of the designated study sites received severe cases; nevertheless, we believe we were able to show clearly that myocardial injury remains an independent risk factor in the setting of COVID-19, and our findings are consistent with the growing evidence.
Myocardial injury is not uncommon among patients admitted with COVID-19 infection and is significantly associated with higher mortality and poor outcomes. Further studies and meta-analyses are needed to replicate this finding and to characterize other predictive variables.
|
To the Editors:
The coronavirus disease 2019 (COVID-19) outbreak originated in Wuhan, China, in December 2019. It has now affected 197 countries. By 27 April 2020, it was reported that 2 878 196 individuals were infected, with 6.1% of patients diagnosed with severe illness, and more than 198 000 deaths worldwide. 1 Early identification and prediction of acute exacerbation is of great importance for the management of COVID-19. However, a serum biomarker for prognosis of COVID-19 is currently lacking.
Serum amyloid A (SAA) is commonly elevated in the acute phase of inflammatory diseases. 2 We describe levels of serum SAA, C-reactive protein (CRP) and procalcitonin (PCT) in patients with COVID-19 during their hospital admission.
We recruited patients with COVID-19 from 20 January to 10 March 2020 and measured serum markers (SAA, CRP and PCT) within the first day of hospitalization. This study was approved by the Hospital Ethics Review Committee (Ethics No 20201134) and the requirement for patients' informed consent was waived. The outcomes of these patients were recorded as either improved and discharged or acutely exacerbated, according to oxygenation status (oxygen saturation < 93% and arterial partial pressure of oxygen/fraction of inspired oxygen ≤ 300 mm Hg) and chest radiography (>50% lesion progression within 24-48 h on pulmonary imaging). A logistic regression model and receiver operating characteristic (ROC) curve analysis were used to investigate the possible roles of SAA, CRP and PCT in predicting the prognosis of COVID-19.
The current observational study enrolled 118 patients with COVID-19 (64 males) whose average age (mean ± SD) was 49.55 ± 15.95 years; the time from illness onset ranged from 1 to 14 days. On admission, 16 cases were diagnosed as severe COVID-19 and the remaining 102 patients were identified as ordinary cases, who received further medical surveillance to investigate the prognosis. The levels of SAA, CRP and PCT were markedly elevated in patients with severe illness compared with ordinary cases (SAA (mean ± SD): 40.42 ± 52.62 vs 198.32 ± 55.12 mg/L, P < 0.001; CRP (mean ± SD): 16.55 ± 10.99 vs 46.52 ± 35.21 mg/L, P < 0.001; PCT (mean ± SD): 0.051 ± 0.041 vs 0.125 ± 0.148 ng/mL, P < 0.001) (Fig. 1) . Furthermore, all 102 ordinary patients received antiviral therapy (Arbidol, Suzhou, China); 71 of these patients recovered and were discharged, but 31 underwent acute exacerbation. Logistic regression showed that SAA, but not CRP or PCT, could serve as an independent predictive factor of disease prognosis (OR: 1.031, 95% CI: 1.004-1.060, P < 0.05). ROC analysis demonstrated that SAA conferred greater diagnostic value than CRP and PCT in predicting disease progression (area under the curve (AUC): 0.9683 vs 0.8089 and 0.7684, both P < 0.05) (Fig. 2) .
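For readers who want to mirror this workflow, the sketch below shows the shape of the analysis (logistic regression for exacerbation, then per-marker ROC AUCs) in Python with scikit-learn. The letter does not publish its code or raw data, so the arrays here are random placeholders and the outputs will not match the reported OR or AUC values.

```python
# Illustrative sketch of the analysis pipeline: logistic regression for
# acute exacerbation, then ROC AUC per marker. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 102                                   # ordinary patients under surveillance
exacerbated = rng.integers(0, 2, n)       # 1 = acute exacerbation (placeholder)
X = rng.normal(size=(n, 3))               # columns: SAA, CRP, PCT (placeholder)

model = LogisticRegression().fit(X, exacerbated)
print("odds ratios:", np.exp(model.coef_))   # OR per unit increase in each marker

for name, col in zip(["SAA", "CRP", "PCT"], X.T):
    print(name, "AUC =", round(roc_auc_score(exacerbated, col), 3))
```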
Primary inflammation, driven by rapid viral replication and the release of potent pro-inflammatory cytokines, occurs at the early stage of 2019-novel coronavirus (nCoV) infection. 3 The pulmonary infiltrate and diffuse alveolar damage in COVID-19, validated by autopsy study, 4 could potentiate further secretion of a variety of inflammatory cytokines. 5 Overall, we found that SAA and CRP were notably elevated in patients with COVID-19 at the start of their hospital stay, although patients had mild respiratory symptoms with focal pulmonary infiltrates. SAA could be an independent predictive factor of severe COVID-19, with an accuracy of 89.1% in predicting acute exacerbation (cut-off value: 122.9). Further evidence needs to be collected to confirm the possible correlation between SAA and the severity of outcome in COVID-19. (Figure 1: levels of inflammatory indicators in patients with COVID-19 on admission, by outcome group: will worsen, has worsened, or remains mild. COVID-19, coronavirus disease 2019; CRP, C-reactive protein; PCT, procalcitonin; SAA, serum amyloid A.)
|
coronavirus case numbers. Given what we know about European and Asian cases, we integrate these countries into our discussion in order to suggest appropriate NPIs for Canadian federal and provincial policymakers. Based on these models and predictions, we also question the reluctance of Canadian policy makers to learn from jurisdictions such as Taiwan and Hong Kong, where the COVID-19 outbreak appears to have been brought under control. 6 These countries have also extensively adopted face masks for the wider public, which along with hand hygiene are demonstrated to be effective against viral spread. 7, 8
In order to predict the trajectory of COVID-19 in Canada, we fit an exponential model to data on COVID-19 cases, in line with prior literature, which suggests that an epidemic's early stages can be exponential. 9 This fit is conducted via non-linear least squares estimation using the statistical package R. In particular, consider the following exponential model:

y = A·e^(rX),

where y is the number of confirmed COVID-19 cases, X is the number of days that the region has had the novel coronavirus, and A and r are the parameter values that we fit using non-linear least squares regression.
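The fit itself is a standard non-linear least squares problem. The authors used R; the sketch below shows an equivalent computation in Python with scipy, using synthetic case counts rather than the published Canadian data, and adds a one-week-ahead projection of the kind discussed next.

```python
# Hedged sketch of the paper's fit of y = A*exp(r*X) by non-linear least
# squares, in Python rather than R; `cases` is synthetic, not the real data.
import numpy as np
from scipy.optimize import curve_fit

def exponential(X, A, r):
    """Exponential growth model: y = A * exp(r * X)."""
    return A * np.exp(r * X)

rng = np.random.default_rng(1)
days = np.arange(1, 31)                                   # X: days since first case
cases = 3 * np.exp(0.23 * days) * rng.lognormal(0, 0.05, days.size)  # placeholder

(A_hat, r_hat), _ = curve_fit(exponential, days, cases, p0=(1.0, 0.2))
print(f"A = {A_hat:.2f}, r = {r_hat:.3f}")

# One-week-ahead projection from the fitted curve
print("projected cases 7 days out:", round(exponential(days[-1] + 7, A_hat, r_hat)))
```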
The number of those infected with COVID-19 has been published daily since January 25, 2020. Based on these data from government sources, we can calculate the exponential fit for these cases, as seen in the Appendix, Figure 1. The parameter values from this fit are shown in Table 1. For policymakers to get a sense of how quickly the number of cases can grow over time, consider Appendix Figure 2. As an illustrative example, consider the following. On March 20, 2020, there were 1,087 cases of SARS-CoV-2 in Canada. One week later, on March 27, 2020, the number of cases is expected to more than quintuple to over 5,700, based on the exponential model. A week after that, on April 3, 2020, the number of cases is expected to reach 30,131, if current trends do not abate. These numbers will pose significant threats to ICU bed capacity and to the ability of hospitals to provide front-line medical staff with adequate personal protective equipment (PPE). Canada has ten provinces, four of which appear to be driving this exponential trend: British Columbia, Ontario, Alberta, and Quebec. As of March 20, 2020, 92 percent of Canada's COVID-19 cases come from these four provinces. These four provinces also contain 86 percent of Canada's population, and share a number of other factors that make them more susceptible to disease transmission: densely populated urban areas, international and national travel hubs, and immigration. The same non-linear least squares method was applied to each individual province's COVID-19 case numbers, with results shown in Appendix Figure 3 (British Columbia), Figure 4 (Ontario), Figure 5 (Alberta), and Figure 6 (Quebec). In each of these provinces, an exponential fit appears to model the data well, suggesting good short-term predictive ability.
As with any discussion of exponential virus growth, it is difficult to predict exactly when the peak of the outbreak will take place in Canada. However, our models are consistent with other models of exponential growth, including those from Italy, where the exponential growth pattern continues unabated. 5 Countries in Asia, including Taiwan, Singapore, and Hong Kong, have rapidly brought the COVID-19 outbreak under control and have important lessons to teach European and North American nations struggling with exponential growth rates. These jurisdictions have used a multitude of early, aggressive interventions, including rapid border controls, social distancing, the innovative and population-wide use of smart phone technology, and the ubiquitous wearing of surgical masks in healthcare settings as well as among the general population. 10 Apart from the late and uncertain adoption of social distancing, Canada has not been willing or able to learn from these countries' experience. In particular, there is not yet any attempt to mobilize smart phone technology or to require face masks for all healthcare workers. Face masks have been shown to reduce infection rates in influenza epidemics, and WHO officials have been shown in news media wearing face masks whilst addressing news conferences. Smart phone technology adapted to Canadian needs would involve the creation of a national-level app which can be downloaded and used to track COVID-19 cases in real time, provide uniform updates on public policy and announcements, and make simple, consistent messaging available on a national scale. The ubiquitous use of face masks, at least in healthcare settings, is a rapidly adoptable NPI which can have major impacts on COVID-19-related morbidity and mortality in Canada and elsewhere. 11 Policymakers in Canada should also implement a rapid roll-out of policies that harness the power of smart phone technology 12-15 and the wearing of face masks by the public. Unfortunately, the messaging in Canada has been counterproductive and has included various versions of "face masks don't work unless properly used." This is the same as claiming that hand washing does not work unless properly carried out. Both Taiwan and Hong Kong function as forms of democracies, not unlike Canada and the United States. It should therefore be possible to move with deliberate and urgent speed to implement policies that have been shown to work in these Asian jurisdictions. Mandating healthcare workers to wear a mask in the workplace is a good start. Failure to make bold and rapid moves based on the successful measures implemented in Taiwan and Hong Kong is likely to prove very costly in terms of overloaded ICU resources, public mistrust of health officials and an exponential increase in morbidity and mortality, as observed in countries like Italy.
|
Influenza A and B viruses are major pathogens that represent a threat to public health with subsequent economic losses worldwide (1) . Vaccination is the primary method for prevention; antiviral drugs are used mainly for prophylaxis and therapy. Currently, 2 classes of drugs, matrix 2 (M2) blockers and neuraminidase inhibitors (NAIs), are available, but M2 blockers such as amantadine and rimantadine are not commonly used because of the rapid generation of resistance and lack of efficacy against influenza B virus (2-4). The NAIs zanamivir and oseltamivir are widely used because of their effects against influenza A and B viruses and a low frequency of resistance. NAI virus surveillance studies by several groups have demonstrated that <1% of viruses tested show naturally occurring resistance to oseltamivir as of 2007 (5-10), indicating limited human-to-human transmission of these viruses.
At the beginning of the 2007-08 influenza season, however, detection of a substantially increased number of oseltamivir-resistant influenza viruses A (H1N1) (ORVs) was reported, mainly in countries in Europe, where the prevalence varied, with the highest levels in Norway (67%) and France (47%) (11-14). These viruses showed a specific NA mutation with a histidine-to-tyrosine substitution at aa position 275 (N1 numbering, H275Y), conferring high-level resistance to oseltamivir. Most of these ORVs were isolated from NAI-untreated patients and retained an ability of human-to-human transmission similar to that of oseltamivir-sensitive influenza viruses A (H1N1) (OSVs) (10, 15). In response to public health concerns about ORVs, the World Health Organization (WHO) directed Global Influenza Surveillance Network laboratories to intensify NAI surveillance and posted regularly updated summaries of ORV data collected from each laboratory on its website (16). This site reported that the global frequency increased from 16% (October 2007-March 2008) to 44% (April 2008-September 2008) to 95% (October 2008-January 2009), indicating that ORVs have spread rapidly around the world. Japan has the highest annual level of oseltamivir usage per capita in the world, comprising >70% of world consumption (10). Such high use of oseltamivir has raised concerns about the emergence of viruses with increased resistance to this drug. Moreover, in Japan, 2 recent influenza seasons were dominated by influenza viruses A (H1N1) (Figure 1). If a high prevalence of ORVs is observed, the primary selection of oseltamivir for treating influenza patients should be reconsidered. Thus, monitoring ORVs is a serious public health issue.
To estimate the frequency of ORVs and characterize these viruses, we analyzed 1,734 clinical isolates from the 2007-08 season and 1,482 isolates from the 2008-09 season by NA sequencing and/or NA inhibition assay. The total frequencies were 2.6% in the 2007-08 season and 99.7% in the 2008-09 season, indicating that ORVs increased dramatically in Japan.
Phylogenetic trees of the NA and HA1 genes were constructed by the neighbor-joining method and included representative ORVs and OSVs isolated from several prefectures in Japan. Sequence information for isolates from other countries was obtained from the Global Initiative on Sharing Avian Influenza Data and the Los Alamos National Laboratory database. All amino acid positions in the phylogenetic tree are given in N1 numbering.
The chemiluminescent NA inhibition assay was performed by using the NA Star Kit (Applied Biosystems, Tokyo, Japan) with slight modifications of the instructions provided by the manufacturer. The final drug concentration ranged from 0.03 nmol/L to 6,500 nmol/L for oseltamivir and from 0.03 nmol/L to 12,500 nmol/L for zanamivir. Chemiluminescent light emission was measured by using an LB940 plate reader (Berthold Technologies, Bad Wildbad, Germany). Drug concentrations required to inhibit NA activity by 50% (IC50) were calculated by a 4-parameter method using MikroWin 2000 version 4 software (Mikrotek Laborsysteme GmbH, Overath, Germany).
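The 4-parameter calculation performed by the plate-reader software corresponds to fitting a four-parameter logistic (4PL) dose-response curve and reading off its midpoint. A hedged sketch in Python, with invented dose-response points rather than the study's measurements, is below.

```python
# Sketch of a 4-parameter logistic (4PL) IC50 calculation of the kind the
# plate-reader software performs; the data points are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL curve: NA activity (%) as a function of drug concentration (nmol/L)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.03, 0.1, 0.3, 1, 3, 10, 100, 1000, 6500])  # oseltamivir range
activity = np.array([99, 98, 95, 85, 60, 25, 5, 1, 0.5])      # % activity (made up)

params, _ = curve_fit(four_pl, conc, activity, p0=(0, 100, 2, 1))
print(f"IC50 = {params[2]:.2f} nmol/L")
```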
The HI test was performed to evaluate the reactivity of ferret antiserum against the 2008-09 vaccine strain A/Brisbane/59/2007, as described in the WHO manual (17). Antiserum was treated with receptor-destroying enzyme II (Denka Seiken, Tokyo, Japan) and adsorbed with packed turkey erythrocytes before testing to prevent nonspecific reactions. A 0.5% suspension of turkey erythrocytes was used for the HI test. Viruses with an HI titer reduced >8-fold relative to the homologous titer of the A/Brisbane/59/2007 antiserum were regarded as antigenic variants.
To determine the cutoff value between NAI-resistant (outlier) and NAI-sensitive viruses, box-and-whisker plots were used. The cutoff value was defined as the upper quartile + 5.0 × the interquartile range (from the 25th to the 75th percentile). In this study, ORVs with H275Y were excluded from the overall population for statistical analysis. Outliers were excluded from the calculation of mean values and standard deviations for IC50.
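Applied to a vector of IC50 values, the rule looks like the following sketch (the values are placeholders, not the study's measurements).

```python
# Sketch of the outlier rule described above: cutoff = upper quartile
# + 5.0 x interquartile range, computed on IC50 values after excluding
# known H275Y resistant viruses. Values below are made up.
import numpy as np

ic50 = np.array([0.08, 0.10, 0.09, 0.12, 0.11, 0.10, 0.13, 0.45])  # nmol/L
q1, q3 = np.percentile(ic50, [25, 75])
cutoff = q3 + 5.0 * (q3 - q1)

outliers = ic50[ic50 > cutoff]
print(f"cutoff = {cutoff:.2f} nmol/L; outliers: {outliers}")
```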
To estimate the frequency of influenza A (H1N1) ORVs in each prefecture of Japan, 1,734 isolates from the 2007-08 season and 1,482 isolates from the 2008-09 season were collected from all prefectures and examined by NA sequencing to detect the H275Y mutation in the NA protein. In the 2007-08 season, 45 viruses possessing the H275Y mutation (total ORV frequency 2.6%; Figure 2, panel A) were observed in 10 prefectures, indicating that the frequency of ORVs was significantly lower than that in countries in Europe and the United States (8, 11-14). In Tottori prefecture, however, 22 of 68 influenza viruses A (H1N1) tested possessed H275Y, a markedly higher frequency (32.4%) than in other prefectures. In the 2008-09 season, however, ORVs were observed nationwide. Of 1,482 influenza viruses A (H1N1), 1,477 possessed the H275Y mutation, for a total frequency of 99.7% (Figure 2, panel B). In the phylogenetic analysis, the NA genes of ORVs from Japan in the Hawaii lineage were identical or genetically close to those of OSVs from Japan (online Appendix Figure). Their HA genes were also genetically identical or close together (online Appendix Figure), suggesting that almost all ORVs from Japan with the Hawaii lineage are derived from OSVs from Japan. On the other hand, in the Northern-Eu lineage, OSV counterparts were not observed, but foreign ORVs genetically close to ORVs from Japan were observed. During the 2007-08 season, the NA gene of ORVs from Japan was close to that of ORVs isolated from countries in Europe (i.e., A/Paris/0341/2007 and A/England/26/2008). During the 2008-09 season, the ORVs from Japan, which shared A189T on the HA protein, were further divided into 4 subclades (C-1 to C-4) by common amino acid changes on HA and/or NA (online Appendix Figure). ORVs from Japan in C-2 and C-3 were genetically close to ORVs isolated from North America or Hawaii (e.g., A/Memphis/03/2008 and A/Hawaii/19/2008), whereas ORVs in C-1, representing most influenza A (H1N1) viruses from the 2008-09 season in Japan, and ORVs in C-4 were close to ORVs isolated from South Africa and Australia in the Southern Hemisphere (e.g., A/Kenya/1432/2008 and A/Victoria/501/2008). In each subclade except C-3, the foreign ORVs were isolated before the emergence of the corresponding ORVs from Japan. These findings suggest that ORVs from Japan within the Northern-Eu lineage did not emerge domestically but instead may have been introduced from various countries.
Of the 364 viruses (306 isolates from the 2007-08 season and 58 isolates from the 2008-09 season) tested by NA inhibition assay, 101 possessed the H275Y substitution. In the NA inhibition assay, although precise IC50 values were calculated from a normal sigmoid curve (Figure 3, panels A and B), some viruses generated 2 types of unusual sigmoid curves (Figure 3, panels C and D) resulting from mixed populations of NAI-resistant and -sensitive viruses, as previously reported (18). Tentative IC50 values were calculated from type A curves (Figure 3, panel C) and included in the overall statistical analysis, but values could not be calculated from type B curves (Figure 3, panel D). The latter viruses were regarded as resistant candidates.
In the NA inhibition assay for oseltamivir, OSVs showed a mean ± SD IC50 of 0.10 ± 0.05 nmol/L (range 0.01-0.35 nmol/L), and ORVs had a mean ± SD IC50 of 67.7 ± 44.1 nmol/L (range 26.1-239.2 nmol/L), showing a reduction of >260-fold in susceptibility to oseltamivir. One OSV identified as a statistical outlier (cutoff IC50 >0.40 nmol/L; upper quartile + 5.0 × interquartile range) showed a D151E substitution on the NA protein (Table 1).
In the NA inhibition assay for zanamivir, statistical analysis showed that 341 viruses were regarded as zanamivir-sensitive, with a mean ± SD IC50 of 0.40 ± 0.26 nmol/L (range 0.01-1.92 nmol/L), and 16 viruses (10 ORVs and 6 OSVs) were identified as outliers (cutoff IC50 >1.99 nmol/L) (Table 1). These data suggest that D151 changes have a substantial effect on sensitivity to zanamivir (and oseltamivir). Moreover, A/Tottori/44/2008, with H275Y and D151D/G substitutions, conferred high-level resistance to both NAIs (Figure 3, panels A and B). However, a recent study reported that a D151E change was detected only after virus propagation in cell culture, and not in the original clinical specimen (19).
Tottori Prefecture is located in the western part of the main island of Japan. Comprising 19 cities and geographically divided into 3 areas, this prefecture has the lowest population in Japan (Figure 4, panel B). Despite the low overall frequency of only 2.6% in Japan during the 2007-08 season, an unexpectedly high frequency (32.4%) of ORVs was observed in Tottori prefecture (Figure 2, panel A). ORVs from Tottori were collected from 4 cities in 2 areas, with no systematic bias apparent in the sampling process (Figure 4, panel B).
Phylogenetic analysis of NA genes showed that these ORVs formed 3 subclades (Figure 4, panel A). OSVs genetically close to ORVs were observed in T-2, suggesting that ORVs in T-2 were derived from OSVs in Tottori prefecture. A mapping study of the ORVs showed that all ORVs in the Hawaii lineage were collected from Tottori city only, primarily at the end of January, whereas ORVs of the Northern-Eu lineage were collected from 4 cities, including Tottori city, during February and March. Genetically diverse ORVs belonging to T-1 to T-3 were cocirculating only in Tottori city, in the eastern area (Figure 4, panel B). The Tottori case raised concern about the possibility that these Tottori ORVs could survive to become origin ORVs for the 2008-09 season in Japan. However, phylogenetic analysis showed that none of the ORVs isolated during the 2008-09 season were genetically close to the ORVs from Tottori (online Appendix Figure). As a result, all ORVs from Tottori seem to have been eliminated in the 2007-08 season, and ORVs that may have been introduced from other countries were circulating during 2008-09 in Japan.
Our study demonstrated that ORVs dramatically increased in Japan from the 2007-08 season (2.6%) to the 2008-09 season (99.7%). All tested ORVs showed a reduction of >260-fold in susceptibility to oseltamivir by NA inhibition assay. On the other hand, almost all ORVs remained sensitive to the other antiviral drugs, e.g., zanamivir and the M2 inhibitors. HI testing suggested that the current vaccine, A/Brisbane/59/2007, would be effective against recent ORVs. In addition, recent studies have reported that symptoms and hospitalization rates of patients infected with ORVs are no different from those seen with OSVs (14, 20). (Table 1 footnotes: viruses with D151D/G tended to generate both curve patterns, Figure 3, panels C [type A] and D [type B], on repeat testing of the same samples, and type B was selected in those cases; although A/Tottori/44/2008 showed a mixed population of D151D/G, it tended to show a normal curve fit [Figure 3, panel B]; the IC50 values of most viruses with D151D/E tend to be higher than mean IC50 values but do not exceed the cutoff value.)
Japan has the largest per capita use of oseltamivir (>70% of world consumption) (10). Because this use could cause efficient selection of ORVs in individual patients, Japan might have been expected to be the initial site of worldwide spread of ORVs. However, long-term NAI surveillance in Japan during 1996-2007 and recent surveillance showed a low frequency of NAI-resistant viruses for all strains and subtypes (10, 21, 22), suggesting that the transmissibility of ORVs selected by drug pressure was remarkably decreased. In addition, previous NAI surveillance (5-10) and several animal studies (23-26) also suggested that NAI-resistant viruses would become defective viruses with attenuated infectivity and transmissibility to humans. In contrast, despite little NAI use, a high emergence of ORVs has been detected in several countries in Europe since November 2007. These ORVs were transmitted from human to human as efficiently as OSVs, resulting in worldwide spread in a short period of time. Although whether the initial ORV detected in Norway in the 2007-08 season appeared because of NAI drug pressure is unknown, those ORVs may have acquired amino acid changes on NA and/or other proteins, in addition to the H275Y substitution on the NA protein, that compensate for the defect. Most ORVs belong genetically to the Northern-Eu lineage in clade 2B, suggesting that this gene constellation may confer a substantial advantage in retaining infectivity and transmissibility.
An interesting question arose as to where the ORVs in Japan originated. In the Hawaii lineage, almost all ORVs in Japan would be derived from OSVs in Japan, because the NA genes of the ORVs were similar to those of OSV counterparts isolated at similar times or from similar regions (online Appendix Figure). On the other hand, in the Northern-Eu lineage, ORVs in Japan would have been introduced from other countries. In 2007-08, almost all ORVs would have been imported from countries in Europe. In 2008-09, the ORVs in C-1, which comprised most isolates in 2008-09, and the ORVs in C-4 were genetically similar to ORVs isolated from the Southern Hemisphere. Because influenza activity in the Southern Hemisphere occurs half a year earlier than that in the Northern Hemisphere, most ORVs in Japan conceivably could have been imported from the Southern Hemisphere. ORVs in C-2 and C-3 were genetically similar to ORVs isolated in North America and Hawaii, but the collection months of the ORVs in C-3 were similar to one another, suggesting that ORVs in C-3 might be derived from an unknown common origin ORV. The ORVs obtained during 2008-09 were not genetically similar to any ORVs isolated in Tottori during 2007-08, indicating that the ORVs from Tottori had been eliminated and had not formed the origin of the ORVs for the 2008-09 season in Japan. As for A/Yokohama/91/2007, belonging to clade 2C, the patient from whom this virus was isolated was known to have taken oseltamivir before sampling (22), indicating that selective drug pressure in this person might have selected for this ORV.
In the NA inhibition assay for zanamivir, some viruses, including ORVs and OSVs, showed reduced sensitivity to zanamivir. NA sequencing of these viruses showed 2 types of amino acid changes. One virus, A/Tottori/16/2008 (an OSV), possessed a Q136K substitution, which reportedly confers resistance to zanamivir (27, 28). Conversely, most of the other viruses possessed D151G/V/N. Amino acid changes of D151 to N or E among subtype H1N1 viruses and to A, G, E, N, or V among H3N2 viruses have been reported (7, 8, 19), and viruses with D151 substitutions often exhibit reduced sensitivity to NAIs (8, 19, 29). However, a recent study reported a possible role for cell culture in selecting these D151 variant viruses (19). In the present study, D151 variations (D151G/E/N) also were not detected in the available original clinical specimens (Table 1), supporting the previous finding. We thus concluded that the viruses with D151 variations would not have emerged naturally, and that all ORVs would remain sensitive to zanamivir.
By sequencing the M2 gene, we confirmed that almost all Japanese ORVs belonging to clade 2B retained a genotype sensitive to M2 inhibitors, consistent with previous reports that recent clade 2B viruses are sensitive to M2 inhibitors whereas clade 2C viruses are resistant (27).
During the 2007-09 seasons, we also conducted NAI surveillance for A/H3N2 and type B viruses circulating in Japan and identified no viruses resistant to either NAI. Conversely, in March and early April 2009, a new swine-origin influenza virus A (H1N1) (now known as pandemic [H1N1] 2009 virus) emerged in Mexico and the United States and spread rapidly to many countries, including Japan (30-33). In June 2009, detection of pandemic (H1N1) 2009 virus with H275Y on the NA protein was reported from Denmark, the Hong Kong Special Administrative Region, People's Republic of China, and Japan, but all ORVs of pandemic (H1N1) 2009 virus emerged as sporadic cases with no evidence of efficient human-to-human transmission (34). Although oseltamivir remains a valuable drug for treatment of pandemic (H1N1) 2009, many ORVs were isolated after prophylaxis with a half dose of the drug. Therefore, prophylaxis with oseltamivir may not be recommended as
|
Nosocomial, or healthcare-associated, infections (HAIs) are a major burden to public health and the functioning of modern healthcare systems. In Canada, more than 200,000 patients acquire an HAI annually and, as a result, an estimated 8000 die [1] . Figures in the United States and Europe are comparable on a per-capita basis [2, 3] . The 2003 severe acute respiratory syndrome (SARS) outbreaks, and more recently those of Middle East respiratory syndrome (MERS), highlight the major threat posed by HAIs, both within the hospital and for the wider community. Close contact between patients and/or healthcare workers (HCWs) and high concentrations of medically vulnerable populations, combined with physical movement between treatment areas, are factors that may facilitate HAI spread within health care institutions and the community.
Current infection prevention and control (IPC) measures focus on the proper performance of both routine practices (e.g. hand and respiratory hygiene) and additional precautions (e.g. airborne, contact and droplet precautions) by all HCWs [4, 5] . Before patient contact, HCWs determine the precautions to be taken based on their own situational risk assessment. However, the heterogeneity of collective contacts among patients and HCWs is not specifically addressed in the current guidelines. Preliminary attempts to quantify mixing patterns and contact rates have been conducted among the general population on a large scale [6-9], or in non-healthcare settings [10-12], but rates of HCW contacts within healthcare settings are postulated to be significantly higher and more heterogeneous than those within the general population [13] .
Studies using electronic medical records to examine spatial movement throughout the hospital provide information on only a small subset of hospital interactions. These studies capture patient movement as it pertains explicitly to the more complex clinical services patients receive, but fail to capture HCW social or casual movement, such as visits to the cafeteria or meeting rooms, or some types of clinical contact (e.g. a second staff member assisting with patient mobilization or bathing, or cross-covering a colleague on break) [14-16]. Contact patterns for HCWs have been examined using radio frequency identification (RFID) tags, mote-based sensors, and direct observation [17-20]. These approaches have suggested the potential for "super spreaders" in the hospital setting [20] , and notable differences in contact patterns between occupations [19] . Because these studies have so far been confined to a single ward or unit, their generalizability to a hospital-wide setting is limited, since they do not take contacts outside the study setting into account. In addition to room-level contacts, it is important to note patterns of movement throughout the hospital. These may reveal locations that can more readily propagate infection during outbreak scenarios.
Understanding the movement and contact patterns of HCWs within hospital settings may allow for more targeted and effective infection control interventions. To address this knowledge gap, we conducted a cross-sectional study of HCWs in three major Canadian health care facilities to assess interpersonal contact patterns, movement throughout the facility, and demographic characteristics. These data can be used to develop a model that represents the heterogeneous contact patterns in the hospital setting. Additional questions on IPC practices were included to help parameterize future models of HAI reduction interventions.
Using architectural maps and floor plans, site-specific surveys were created for three urban university-affiliated tertiary care Canadian hospitals (hereafter called Hospital A, B and C). The data collection instruments were hardcopy paper booklets with information packages, containing guidelines and rationale for the study, and 1 online survey. Employees were invited to participate through personal invitations, email and posters. Surveys were also attached to employee paystubs on two separate occasions. The paper surveys were to be completed by HCWs and returned anonymously to a centrally-located drop box. Local study staff informed participants that survey completion was voluntary and anonymous. This project was funded by the Canadian Institutes of Health Research (CIHR) and called the CONNECT I study. Ethics review boards at all participating universities and hospitals approved the project.
An estimated 8100 staff working (or volunteering) in any of the three hospitals were eligible to participate (~4100, ~2400, and ~1600 in Hospitals A, B and C, respectively). Our pre-survey target for participation was 1000, or 12.5%. The survey identified 19 different HCW occupational categories including attending and resident physicians, nurses, technicians, support staff, undergraduate trainees and other hospital workers who have patient contact. For this publication, all occupational categories other than physicians, nurses, and administrative/support staff are grouped together as a fourth main category called "other HCWs" (hereafter, oHCW). These categories are summarized in Table 1 .
The surveys collected demographics, spatial movement, and patient interaction (contact) data, as well as selfreported compliance with IPC practices by both the survey respondent and his or her coworkers. Direct patient contact was defined as two or more individuals coming within 1 m (approximately 3 ft) of each other for 2 min or more. This proximity has long been proposed as a guideline for the range of transmission of infection by large droplets. At the time of conducting the survey, this proximity and duration were estimated to be necessary but not sufficient for respiratory infection transmission (in more recent guidance, 2 m is considered the radius for potential transmission [21] ). Indirect contact was defined as two or more individuals co-locating in the same room but not closer than 1 m.
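These definitions translate directly into a simple classification rule; the toy function below encodes the stated thresholds and is illustrative only, since the survey collected self-reports rather than measured distances.

```python
# Toy encoding of the survey's contact definitions: direct contact is two
# people within 1 m of each other for 2 min or more; indirect contact is
# co-location in the same room beyond 1 m. Thresholds mirror the text.
def classify_contact(distance_m: float, duration_min: float,
                     same_room: bool) -> str:
    if distance_m <= 1.0 and duration_min >= 2.0:
        return "direct"
    if same_room:
        return "indirect"
    return "none"

print(classify_contact(0.5, 10, True))   # direct
print(classify_contact(2.5, 30, True))   # indirect
```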
For demographic analyses, differences between groups were assessed using Chi-square tests and analysis of variance (ANOVA).
Hospital floors were identified as predominantly patient-care area (PCA), predominantly non-patient-care area (non-PCA), or mixed (mPCA) by local study staff. Respondents reported the amount of time (in hours or minutes) they averaged weekly in each location within their hospital. Detailed spatial locations, such as the pre-admission unit, day surgery unit, ambulatory internal medicine clinic or cafeteria, were identified in the questionnaire corresponding to each hospital. There were 251, 122, and 97 units in Hospitals A, B and C, respectively. The frequency of visits and mean reported hours were quantified for each location, and analysis of variance (ANOVA) was used to compare the groups. Tukey pairwise tests were used post hoc to identify significant comparisons. Given the diversity and frequency of these small locations, it was necessary to aggregate the information in a form that is simple to present and consistent across all sites. Since this paper concerns the structure of interpersonal HCW contacts and does not address the transmission dynamics of infection spread, we grouped these small locations and present the results for each actual hospital floor as a spatial unit.
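The location analysis described above can be sketched as follows; the study used its own statistical tooling, and the groups below are simulated stand-ins for the reported weekly-hours data.

```python
# Hedged sketch of the ANOVA and Tukey post-hoc comparison of weekly hours
# by location type (PCA / non-PCA / mixed). The samples are placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
hours = {"PCA": rng.normal(12, 3, 40),
         "non-PCA": rng.normal(6, 2, 40),
         "mixed": rng.normal(9, 3, 40)}

F, p = f_oneway(*hours.values())
print(f"ANOVA: F = {F:.1f}, p = {p:.3g}")

values = np.concatenate(list(hours.values()))
labels = np.repeat(list(hours.keys()), [len(v) for v in hours.values()])
print(pairwise_tukeyhsd(values, labels))
```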
Infection prevention and control practices were assessed with questions about regular compliance with IPC precautions as well as through the use of three HCW-patient contact scenarios involving a patient who is diagnosed with a) a respiratory tract infection (e.g. RSV) that is spread by droplets; b) active pulmonary tuberculosis with a productive cough; and c) varicella (chickenpox). Respondents were asked about the precautions they would take (such as wearing a surgical mask, N95 respirator, face or eye shield, gloves, gown, or goggles) in each of these three scenarios. Additionally, for scenario (a), they were asked to provide a response for a situation in which they are within 1 m (3 ft) of the patient with the respiratory tract infection. Also, for scenario (c), they were asked to provide a response assuming they had immunity to varicella (e.g., via childhood infection). Quantitative responses were measured on a 1 to 10 scale. Charting practices regarding the accurate recording of the number of daily patient-HCW interactions were also assessed.
A total of 2813 staff completed paper questionnaires and 235 completed electronic surveys, for 3048 HCW participants (38%), which exceeded our target participation rate three-fold.
The distributions of survey participation by hospital and occupation are summarized in Table 2 . Nurses were the occupational category with the highest aggregate response rate, although more administrative/support staff responded in Hospital A.
The median age of respondents across all sites was 42 years, 81% were female, and most (75%) worked in a patient-care area. More than one third (37%) of physicians worked in other healthcare facilities in addition to the study hospital (Table 3) .
Staff visited an average of 3.79, 3.69 and 3.88 floors per week in their respective healthcare facilities, with standard deviations of 2.63, 1.74 and 2.08, respectively. Physicians reported the highest number of locations visited per week, while nurses reported the lowest. The number of locations visited varied significantly by job category (Table 3). Tukey post-hoc analyses showed that nurses visited significantly fewer locations than physicians, "other HCWs" and admin/support staff (p = 0.002, p = 0.001, p < 0.001, respectively). Table 4 details the amount and type of contacts for each occupational group. Physicians reported the highest number of direct patient contacts (>20 patients/day) but the lowest number of contacts with other HCWs, while nurses had the most extended (>20 min) periods of direct patient contact. oHCWs had the most direct daily contact with other HCWs (Table 4). (Table 1 footnote: respondents were assigned to one of the four above categories based on their description in the free-text field provided.) Table 4 shows the number of contacts per occupational category. The first row in each pair corresponds to the number of respondents who answered the questions labeled A1, ..., D1; the second row shows the percentage of these responses that satisfied the stated criteria (e.g., direct contacts lasting more than 20 min).
To provide additional insight into the aggregate statistics presented in Tables 3 and 4, Fig. 1 illustrates HCWs' time spent on each floor at one of the participating study hospitals. Each bar chart (row) in this figure corresponds to a separate floor in that hospital (labeled L1-L17). Along the horizontal axis, 1512 thin bars represent the 679 administrative/support staff (red), 561 nurses (blue), 104 physicians (cyan) and 168 oHCWs (green) who responded to the survey. The vertical axis represents time on a logarithmic scale; each bar's height reflects the time reported by that worker as having been spent on that floor. Thus, if a HCW reported spending from a few minutes up to 100 min on any single floor, the bar representing her/him rises to the middle tick on the vertical axis; if hundreds of minutes, the bar may end in the middle segment of the y-axis; and finally, if a few thousand minutes (up to a full work week), the bar may end on the upper segment of the y-axis. Also, in this figure, if a HCW spends time on more than one floor during the week, they are represented by nonzero bars in the bar charts corresponding to those floors (and blank space in the bar charts corresponding to the other floors). There is great variability in the reported time spent on each floor, during single, multiple, or routine visits, ranging from a few minutes to nearly a full work week (35 h/week = 2100 min/week).
Based on data shown in Fig. 1 , we generated a visualization of the bipartite network that captures HCW movement within a hospital setting (Fig. 2) . A bipartite network shows the relationship between two distinct classes of nodes, in this case hospital floors and HCWs.
Here the array of larger yellow nodes represents different floors in Hospital A, while all other nodes represent HCWs. An edge (black line) is drawn between a specific HCW and a location when the HCW reported visiting that location. HCW nodes are colored based on their occupational category.
The heterogeneity in the duration of time spent by a HCW in a spatial unit implies that the links connecting hospital floor and HCW do not have equal significance with respect to respiratory-borne infection transmission.
For low- to moderately contagious infections, the probability of transmission among contacts in close proximity is generally considered to be proportional to the duration of contact for each pair of individuals [22-24]. To account for this duration, each link should be weighted according to the length of time spent in a spatial unit; the longer the duration, the higher the weight. Incorporating weighted edges into the network results in the gravity-centered network layout shown in Fig. 2, where edges with higher weights ("stronger" edges) and their associated nodes are concentrated near the core, while edges with lower weights ("weaker" edges) and their associated nodes are pushed outward to the periphery of the network.
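A minimal sketch of such a weighted bipartite network, assuming the Python networkx library and invented HCW-floor reports rather than the survey data, is shown below; edge weight carries the reported minutes per week, so a force-directed layout pulls heavily connected HCWs toward the core, as in Fig. 2.

```python
# Hedged sketch of the weighted bipartite HCW-floor network; nodes, floors
# and minutes are invented for illustration, not taken from the survey.
import networkx as nx

G = nx.Graph()
floors = ["L1", "L2", "L3"]
G.add_nodes_from(floors, bipartite="floor")

reports = [("nurse_01", "L2", 1800),       # (HCW, floor, minutes/week)
           ("physician_01", "L1", 300),
           ("physician_01", "L2", 400),
           ("physician_01", "L3", 250),
           ("admin_01", "L1", 2100)]
for hcw, floor, minutes in reports:
    G.add_node(hcw, bipartite="hcw")
    G.add_edge(hcw, floor, weight=minutes)

# Weighted degree: HCWs with high total minutes across many floors end up
# near the dense core of a force-directed (gravity-style) layout.
print(dict(G.degree(weight="weight")))
pos = nx.spring_layout(G, weight="weight", seed=0)
```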
The irregular density of edges in Fig. 2 reveals considerable heterogeneity in both the number and duration of contacts in the study hospitals. For infectious pathogens whose probability of transmission is proportional to the duration of contact (directly between individuals, or indirectly between a person and a spatial unit), this may have a significant impact on the transmission pathways within a healthcare setting. The likelihood of igniting a HAI outbreak, or being infected during such an outbreak, is higher for the nodes that are part of the central cluster than the ones belonging to the dendritic branches in the periphery of the network.
Decomposing this network structure into its constituent occupational categories further exposes this heterogeneity. Fig. 3 shows the underlying weighted networks of the four occupational categories stratified around a central image that is a smaller replica of the full network (i.e., Fig. 2). While most nodes corresponding to the participating Administration and Nurse categories occupy the peripheral branches of the weighted network, the majority of physicians are clustered in the centre (grey background area in all panels). For the location analyses, both the number of visits per week and the mean hours spent differed significantly by location type (Table 5). Public spaces had the most visits per week but the fewest mean hours spent (0.9 h). Inpatient settings had significantly more visits per week than outpatient settings.
The network in Fig. 2 can be divided into 3 disjoint networks based on each hospital floor's classification as PCA, non-PCA, or mixed (Fig. 4). The sub-network for PCA floors (top-left panel in Fig. 4) shows 3 distinct patterns: outer clusters of nodes corresponding to HCWs who visit only one floor; intermediate clusters of nodes corresponding to HCWs who move between two floors; and a central core of nodes representing individuals who visit several floors.
Comparatively speaking, floors with predominantly non-PCA areas (top right panel in Fig. 4) have a higher between-floor traffic rate than PCA floors (top left panel). The highest between-floor HCW traffic occurs in mixed areas (lower panel in Fig. 4).
Finally, as with Figs. 2 and 3, the sub-network corresponding to the PCA floors (top left panel in Fig. 4) may be stratified into the four occupational categories (Fig. 5). All occupational categories include nodes that report movement between multiple PCA floors (Table 6). This movement may contribute to an increased likelihood of an infectious transmission event within PCA floors.
Although respondents reported that they believed the majority of their HCW colleagues would comply with IPC guidelines (61.5% "mostly" comply, 31.5% "partially" comply), there was wide variability in reported use of personal protective equipment and only 81-87% expected compliance with handwashing after interacting with patients with communicable respiratory diseases (Table 7) . Additionally, most respondents believed that patient charts would inaccurately report single or multiple HCW-patient interactions.
The CONNECT I survey results presented here provide the most comprehensive picture of hospital-wide contact networks yet published. These insights provide evidence to support the development of novel network-based strategies for the prevention and control of HAI. Since the SARS outbreaks in 2003, there has been an emerging recognition of the complexity of hospital-based contact structures, and that this complexity varies by occupational type [25]. While prior studies have focused on individual hospital wards [18, 19, 26], patient-to-patient contact [15], or simulated/hypothetical patient-to-HCW contact [27], we report on actual self-reported patterns of movement and contact of over 3000 HCWs in three Canadian urban tertiary care university-affiliated hospitals. The resulting facility-specific networks identify occupational categories and specific locations within each unique setting that have high and low contact rates. Such data can be utilized to inform targeted and efficient IPC strategies.

Contact and movement patterns of HCWs varied significantly by occupation. Although more nurses reported extended periods of direct patient contact, "other HCWs" (non-physician, non-nurse) had significantly more HCW contact per week than any other occupational category. In this paper, we aggregated HCW occupations into 4 main categories. We recognize that HCW occupations such as respiratory therapists and personal care attendants may play a key role in spreading micro-organisms through physical contact, procedures such as intubation, or patient movement through the hospital. A higher-resolution analysis of the survey data to address more refined questions may constitute the subject of future publications. The mobility of these occupations within a hospital may facilitate disease propagation compared with more localized (within-ward) movement, such as that of nurses. Modeling the movement of these healthcare workers in the hospital setting may provide further insight into the propagation of diseases throughout the hospital.
We found that physicians, although mobile throughout the hospital, have a shorter duration of contact with other HCWs than any other occupational category, where a contact was defined as being within 1 m of another individual for 2 min or more. This agrees with a study of one pediatric ward by Isella et al. [19], which found physicians to have the fewest contacts of the occupations surveyed, with a contact defined as within 1.5 m for 20 s or more. In contrast, Polgreen et al. [17] found that nurses, resident physicians, and fellows had the highest number of HCW contacts of the job categories observed, where a contact was defined as within 0.9 m with no minimum time component (i.e., duration of contact). More recently, Mastrandrea [26], studying a single infectious disease ward using radio frequency tracking devices, also found that physicians had the highest number of contacts with other healthcare workers, although this was within a total pool of only 22 HCWs. A study by Curtis et al. [16] used movement patterns from electronic medical records to suggest that resident physicians and nurses had the most frequent HCW contacts. While this conflicts with other findings, their definition of a contact differs significantly and did not include contacts in areas that electronic medical records fail to capture. Despite their lower HCW contact rate in our study, physicians may still play a key role in infection-related events in the hospital. For example, significantly more physicians reported direct contact with >20 patients per day, and physicians were the most likely to work in an additional, separate healthcare facility. This indicates that physicians may have a higher capacity to facilitate disease spread across wards and from hospital to hospital. Nurses reported the most extended contact with patients, and so may be at higher risk of becoming infected by a patient. On the other hand, owing to their more localized work space (typically a single ward), they may have a reduced role in spreading disease throughout the hospital population.
Location analyses showed that public spaces, including the cafeteria, lobby café, and coffee shops, were visited the most frequently per week but for relatively short durations; this finding highlights the potential vulnerability of non-clinical spaces in healthcare facilities to promoting infection spread for moderately-to-highly transmissible pathogens. Given the vast overlap of HCWs, patients, and members of the general public who may simultaneously visit these areas, disease spread could easily be facilitated between otherwise unconnected wards or units (or the community at large). Targeting these high-traffic areas with interventions such as hand hygiene (washing stations or alcohol-based sanitizers), mask distribution, or spatial separation may be effective in reaching a large and diverse subset of the hospital population. Inpatient locations were found to have a greater number of visits per week than outpatient locations. Inpatient settings house patients with a higher acuity of illness, so a greater diversity of HCWs may be in contact with each patient. This suggests an increased risk of disease spread in inpatient settings compared with outpatient settings.
Variable compliance with, and incorrect application of, IPC precautions, combined with the nonintuitively diverse structure of HCW contact patterns shown above, may lead to complex infection transmission pathways. Accounting for this complexity will require quantitative complexity-science techniques that go beyond basic statistical description of survey data.
As with any paper-based questionnaire, one limitation of this study is that responses relied on an individual's recollection of movement throughout the hospital. To minimize the impact of this limitation, respondents were given the choice of providing their contact history based on a "typical week" of work, the "last week" of work, or "the last full week worked". After the paper questionnaires were distributed within participating hospitals, drop boxes were provided for several weeks at study sites to collect responses. We assume that, among those who selected "last week", some might have had a chance to assess their responses in "real time", while others relied on recent memory or, for a typical week, on more general recollection.
In the questionnaire, we defined direct contacts as those that occur "within 1 meter/3 feet", while indirect contacts are those that occur "within the same room but not closer than 1 meter/3 feet". Although these two types of contact are mutually exclusive by definition, it might have been difficult for respondents to apply the definitions strictly when recalling past events. It is worth noting that, in addition to the duration of contact, the type and intensity of contact are among the factors to be considered, as physical contact might play an important role for some HAIs [22, 23]. In this paper, our goal was not to construct direct contact networks between HCWs (i.e., networks in which all nodes represent HCWs); rather, we presented co-location networks (i.e., bipartite HCW-location networks) derived from survey data. As such, we used each hospital floor as a single node for ease of presentation. To establish formal inter-HCW contact networks for outbreak and transmission dynamics analysis, future studies will utilize CONNECT I's more refined data corresponding to smaller spatial units (please see Additional file 1) than floor-aggregated data.
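To make the distinction between co-location networks and formal inter-HCW contact networks concrete, the sketch below (reusing the hypothetical bipartite graph G from the earlier networkx example; only a rough proxy for the refined analysis the authors describe) projects the HCW-floor network onto the HCW node set, linking two HCWs whenever they reported time on a common floor.

```python
# One-mode projection of the hypothetical HCW-floor bipartite graph G onto
# the HCW nodes; edge weights count shared floors, a crude proxy for
# contact opportunity (not actual measured contact).
from networkx.algorithms import bipartite

hcw_nodes = [n for n, d in G.nodes(data=True) if d.get("bipartite") == "hcw"]
H = bipartite.weighted_projected_graph(G, hcw_nodes)

for u, v, d in H.edges(data=True):
    print(u, v, d["weight"])  # e.g., hcw_001 hcw_002 1 (shared floor L3)
```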
The network structures presented in this paper reveal a high degree of heterogeneity across HCW occupations and their roles on different wards/floors. These intricacies, combined with heterogeneity in the implementation of IPC measures, imply that designing policy requires network-based quantitative tools that go beyond basic aggregate statistics. These tools provide greater options and flexibility based on the specific contact patterns that facilitate communicable disease transmission within hospital settings. Future research based on the heterogeneity of movement patterns across the 3 hospitals will allow further tailoring of interventions for setting-specific control strategies.
The CONNECT I study provides insight into the movement and contact patterns of healthcare workers in the general hospital setting. These results can inform modeling initiatives to more accurately simulate the spread of HAI, and to optimize control strategies.
|
Filtering facepiece respirators (FFRs) should be discarded after use for one work shift to control infection, 1 especially if they come into contact with airborne pathogens such as Mycobacterium tuberculosis 2 or influenza virus. 3, 4 During the severe acute respiratory syndrome (SARS) epidemic, consumer demand for N95 respirators increased owing to their high collection efficiency. During the outbreak of Middle East respiratory syndrome (MERS) in Korea, pharmacies sold many times more N95 FFRs than usual. However, these FFRs are sometimes reused, especially during a shortage or when their distribution is delayed. 5 Economic considerations may also apply. 6 The price of certified FFRs, such as National Institute for Occupational Safety and Health (NIOSH)-approved N95 FFRs, typically exceeds that of noncertified masks, and affordability considerations favor reuse.
The National Institute for Occupational Safety and Health recommends practices for the extended use and limited reuse of NIOSH-certified N95 FFRs. 1 NIOSH defines reuse as the use of the same N95 respirator for multiple encounters with patients, removing it ("doffing") after each encounter. The respirator is stored between encounters to be put on again ("donned") before the next encounter with a patient. To prevent tuberculosis, the CDC recommends that a disposable respirator can be reused by the same worker as long as it maintains its physical integrity and its proper use provides protection (exposure reduction) consistent with the assigned protection factor for respirators of its class. 1 Furthermore, NIOSH requires that, between uses, used respirators be hung in a designated storage area or kept in a clean, breathable container such as a paper bag.
To minimize potential cross-contamination, respirators are stored without touching each other, and the user of each respirator is clearly identified. Storage containers should be disposed of or cleaned regularly. 1 The FDA defines three kinds of reuse: reuse between patients with adequate reprocessing, reuse by the same person with adequate reprocessing/decontamination, and repeated use by the same person over a period with or without reprocessing. 7, 8 Before FFRs are reused, they may be decontaminated to control the growth of microorganisms on them. However, whether a decontaminated N95 FFR can be reused is an issue that requires detailed consideration. In some cases, the use of chemical disinfectants may require that an employer train workers to protect themselves against chemical hazards and to comply with OSHA's Hazard Communication standard, 29 CFR 1910.1200, and other standards. 9 Contaminated objects with porous surfaces that cannot be disinfected may have to be disposed of. 9 All personnel, clothing, equipment, and samples that leave a contaminated area (generally referred to as the Exclusion Zone) must be decontaminated to remove any harmful chemicals or infectious organisms that may be attached to them. 10 Decontamination methods (i) physically remove contaminants, (ii) inactivate contaminants by chemical detoxification or disinfection/sterilization, or (iii) remove contaminants by a combination of both physical and chemical methods. 10 NIOSH has published a series of research articles on mask decontamination. 11-14 In selecting decontamination methods, both decontamination and protective capability are considered. 15 For example, ultraviolet germicidal irradiation (UVGI) and bleach reportedly do not significantly reduce the protective capability (penetration by contaminants) of FFRs. 13, 14 Bergman et al 13 tested many methods, including UVGI and bleach, and found that FFRs treated with these two decontaminants, as well as control samples, exhibited the expected filter aerosol penetration (<5%) and filter airflow resistance. Physical damage varied with treatment method. Further research is needed before any particular decontamination method can be recommended.
Other chemical and energetic methods also have potential for decontaminating FFRs, 11, 12, 16 but few studies have addressed the elimination of viable microorganisms from FFRs. UVGI has been reported to effectively eliminate H5N1 17 or MS2 coliphages 18 from FFRs without drastically affecting their filtration efficiency. Related studies have not evaluated the efficiency with which decontamination methods destroy bacteria. Therefore, objective, experimentally obtained information concerning the destruction of bacteria by various decontamination methods is required to support the reuse of FFRs.
The FDA has not cleared alcohol as the main active ingredient in liquid chemical sterilants or high-level disinfectants because, although alcohol is rapidly bactericidal rather than bacteriostatic against vegetative forms of bacteria, it does not destroy bacterial spores. 15, 19 However, ethanol or isopropanol can eliminate the electrostatic charges on filters, a pretreatment used before testing particle penetration through electret masks. 20-22 In subtropical areas such as Taiwan, temperature and humidity are high all year round, favoring the growth of bacteria. Therefore, this study compares the cultivability of airborne Bacillus subtilis spores loaded on N95 FFRs after treatment with commercially available decontamination methods against that after storage at a constant worst-case temperature and relative humidity (RH), to elucidate the survival and reproduction of bacteria on N95 FFRs. The potential for N95 reuse during a shortage of epidemic-prevention supplies is evaluated, and recommendations concerning decontamination methods and FFR reuse criteria are made to protect public health.
OSHA has recommended decontamination methods including chemical disinfection, irradiation, gas/vapor sterilization or steam sterilization, and dry heat sterilization. 10 To understand the biological effect of decontamination on FFRs, the following five decontamination methods were compared: low-temperature chemical decontamination using (i) ethanol and (ii) bleach, and physical decontamination using (iii) UVGI, (iv) an autoclave to provide moist heat, and (v) a traditional electric rice cooker (TERC), made in Taiwan, to provide dry heat. The first four methods are preferred methods for the disinfection or sterilization of patient-care medical devices. 15 The TERC is frequently used in hospitals in Taiwan. 22 The filter quality of the FFRs, including particle penetration and pressure drop, has been reported elsewhere. 22
The main test variable in this study is the survival of bacteria loaded on N95 FFRs that were decontaminated by various methods under worst-case temperature and humidity, which prevail when an FFR is placed in a zipper bag in a healthcare worker's pocket with the goal of preventing cross-contamination 21 and touching of the respirator. 1, 23 In the experiment, B. subtilis spores were the tested microbial strain; a six-jet Collison nebulizer (BGI, Waltham, MA) sprayed the spores into a test system, shown in Figure 1, where they were loaded on N95 FFRs by suction to simulate the respiratory flow of workers during intensive activity. 21

• The survival of bacteria on reclaimed National Institute for Occupational Safety and Health-certified N95 filtering facepiece respirators (FFRs) after decontamination is important, especially for healthcare workers.
• Safe respirator usage after decontamination using various methods improves infection control and protection against biohazards.
• The optimal dosages of decontamination methods are important for determining a comprehensive infection control strategy.
• Our work addresses the potential for cross-contamination of reused respirators with a view to overcoming FFR shortages and so increasing capacity for controlling future outbreaks.

The experimental FFR was an N95 FFR (8210, 3M, St. Paul, MN), certified by NIOSH. It was divided into six pieces, to which five decontamination methods were applied; they involved ethanol, bleach, UV, an autoclave, and a traditional electric rice cooker (TERC), made in Taiwan, without steam.
The treatment proceeded as follows.
• Ethanol: Ethanol at various concentrations and volumes was added to the center of the surface of the N95 FFR using a pipette; 21 the FFR was then dried in a petri dish placed in a biosafety cabinet (BSC) for 10 minutes.
• Bleach: A 0.4 mL volume of bleach at various concentrations (5.4% (w/w) as Cl2: original; 2.7%: one part bleach to one part deionized water; 0.54%: one part bleach to nine parts deionized water 13) was added to the center of the surface of the N95 FFR using a pipette; 21 the FFR was then dried in a petri dish in a BSC for 10 minutes.
• UV: An N95 FFR was placed 10 cm below a 6 W handheld UV lamp (model UVGL-58, VUP LLC, Upland, CA) that emitted a wavelength of 254 nm (UVC, 18.9 mW/cm²) or 365 nm (UVA, 31.2 mW/cm²). Both sides of each N95 FFR were exposed for different times (1, 2, 5, 10, and 20 minutes) in a BSC; an illustrative dose calculation follows this list. The UV intensity was measured using a handheld laser power and energy meter (OPHIR NOVA II, model Nova II PD300-UV) and reported as the mean of five measurements over a 10 × 10 mm aperture with a swivel mount and a removable filter.
• Autoclave: The N95 FFR was heated for 15 minutes at 121°C and 103 kPa.
• TERC: The N95 FFR was placed in an electric rice cooker for dry heating for 3 minutes (149-164°C, without added water). 22
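Because UV dose scales linearly with irradiance and exposure time, the reported lamp intensities imply the delivered doses sketched below; this is an illustrative back-of-the-envelope calculation, not a quantity reported in the study.

```python
# Dose (J/cm^2) = irradiance (mW/cm^2) x time (s) / 1000, using the
# measured lamp intensities and the exposure times listed above.
IRRADIANCE_MW_CM2 = {"UVC (254 nm)": 18.9, "UVA (365 nm)": 31.2}

for lamp, irradiance in IRRADIANCE_MW_CM2.items():
    for minutes in (1, 2, 5, 10, 20):
        dose_j_cm2 = irradiance * minutes * 60 / 1000
        print(f"{lamp}: {minutes:>2} min -> {dose_j_cm2:6.2f} J/cm^2")
```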
Each N95 FFR was placed into the system (Figure 1) for 30 minutes of bacterial bioaerosol sampling. The respiratory flow (85 L/min) of workers during high-intensity activities was used, 24 and the face velocity across the whole N95 FFR was calculated as 8.3 cm/s. The N95 FFRs were cut into pieces with a diameter of 45 mm. Each had an effective diameter of 40 mm and a filtration area of 12.6 cm². The sampling flow rate of the pump was 6.3 L/min, which produced the desired face velocity. 17, 21, 25 Bacillus subtilis prototype strains (CCRC 12145, Taiwan Food Industry Research and Development Institute) were used to prepare an endospore suspension for generating bacterial bioaerosols. 21 The suspension was centrifuged at 1917 g for 5 minutes; the supernatant was discarded and the pellet was resuspended in sterile distilled water. This washing process was repeated twice, 26 and the spores were resuspended in approximately 55 mL of sterile distilled water to yield a uniform mixture, which was poured into the Collison nebulizer. 21 The spores were aerosolized at a pressure of 25 psi with a dilution air flow rate of 80 L/min, as presented in Figure 1. The stability of the bioaerosol concentration in the system was verified using an Andersen single-stage sampler (Andersen Inc., Atlanta, GA). 21 The aqueous packing density (α_aq) of the retained liquid decontaminants was adapted from a previous report, in which α_aq was defined as the liquid volume fraction of the filter. 27 The relative survival (RS) of the spores was calculated as 21, 28, 29

RS (%) = (C_f / C_i) × 100,

where C_f is the number of CFUs recovered after decontamination and C_i is the number of CFUs before decontamination (Figure 2). After spores were loaded on the N95 FFR, an RS of 89 ± 6% was obtained after spiking with 50% ethanol, and 73 ± 5% after spiking with 70% ethanol. The lowest RS, 68 ± 3%, was obtained when the concentration of ethanol was 80%. The result obtained using 95% ethanol (RS = 73 ± 7%) was close to that obtained using 70% ethanol, although the samples spiked with 95% ethanol sometimes yielded slightly higher RS values than the 80% ethanol samples. An RS of 59 ± 8% was obtained after 24 hours without decontamination. The 50%, 70%, 80%, and 95% ethanol-treated samples had RS values of 33 ± 8%, 22 ± 8%, 20 ± 2%, and 26 ± 7% after 24 hours, respectively. Figure 4 shows the effect of 70% ethanol on the RS of B. subtilis spores. Just after spiking with ethanol, the RS declined from 100% to 68%-75%. When 0.4 mL (α_aq = 0.23) of 70% ethanol was applied, the RS fell to 22% in 24 hours; the RS fell to 20% when 80% ethanol was used.
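The RS metric itself is a simple ratio; the minimal sketch below (with hypothetical CFU counts, not the study's data) shows the computation.

```python
def relative_survival(cfu_after: float, cfu_before: float) -> float:
    """RS (%) = (C_f / C_i) x 100, per the definition above."""
    return 100.0 * cfu_after / cfu_before

# Hypothetical counts for illustration only:
print(relative_survival(cfu_after=146, cfu_before=200))  # -> 73.0
```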
In the bleach decontamination test, no colony was recovered after 5.4%, 2.7%, or 0.54% NaOCl was used, corresponding to no dilution, twofold dilution, and 10-fold dilution, 9 respectively (Figure 5). This study found that NaOCl, even when diluted 10-fold from standard bleach, had a strong decontamination effect, with 100% bactericidal efficacy.
Similar results were achieved using UVC: no colony was recovered after exposure to UVC for as little as 5 minutes (Figure 6). However, RS remained above 20% after 20 minutes of irradiation by UVA, decaying exponentially with increased exposure time (Figure 6).
Decontamination using ethanol yielded higher RS of spores than the other four decontamination methods (Figure 7). The US CDC and the WHO both recommend the use of 70% alcohol. 15, 30 The US CDC also notes that the biocidal activity of alcohol diminishes sharply at dilutions weaker than 50% (v/v), with the optimal bactericidal concentration falling between 60% and 90% (v/v). 15 From the quadratic polynomial regression, the lowest RS occurred at 83% and 76% alcohol for the initial and 24-hour measurements, respectively. Only approximately 20% of spores retained their cultivability after 20 minutes of irradiation by UVA (Figure 6), whose disinfection effect was comparable to that of ethanol (Figure 4). From the exponential decay regression, the half-life (the exposure time at which RS falls to 50%) was 10 minutes for UVA and 0.17 minutes for UVC (Figure 6). Although UVA could not decontaminate as effectively as UVC, it did have some decontaminating effect. This finding warrants further study.
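The half-life estimates follow from fitting an exponential decay, RS(t) = 100·exp(−kt), to the RS-versus-exposure-time data; the sketch below uses illustrative RS values chosen to be consistent with the reported ~10-minute UVA half-life, not the study's measurements.

```python
# Fit RS(t) = 100 * exp(-k t) and report the half-life ln(2)/k.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 5, 10, 20], dtype=float)      # UVA exposure (minutes)
rs = np.array([93, 87, 71, 50, 25], dtype=float)  # illustrative RS values (%)

def decay(t, k):
    return 100.0 * np.exp(-k * t)

(k,), _ = curve_fit(decay, t, rs, p0=[0.1])
print(f"k = {k:.3f} /min, half-life = {np.log(2) / k:.1f} min")  # ~10 min
```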
The results of our study verify the biocidal efficacy of bleach (0.54% NaOCl), UVC, and an autoclave, which are well-known means of sterilization. 15, 30 Interestingly, the TERC also exhibited biocidal efficacy as a sterilizing device. In the WHO biosafety manual, 30 heat is regarded as one of the most commonly used physical agents for decontamination against pathogens. "Dry" heat, which is non-corrosive, is applied to many items of laboratory-ware that can withstand temperatures of at least 160°C for 2-4 hours. In this study, the TERC was used as a dry heating device and was found to exhibit a biocidal efficacy that reached effective sterilization in 3 minutes. The results achieved using the TERC provide useful information regarding effective means of decontaminating and reusing FFRs.
Notably, when an N95 FFR is reused, the biocidal efficacy of the decontamination treatment, the filter quality, the fit factor (which is affected by physical damage to the frame or rubber strap), and toxic residual chemicals on the FFR 14 must all be considered. For example, bleach can harm the wearer if not properly used to decontaminate an N95 FFR before reuse. 15 Safe disposal of spent bleach is important, and users may decide to neutralize the microbicidal activity of the bleach before disposal; solutions can be neutralized by reaction with chemicals such as sodium bisulfite or glycine. 15 Considering the potential health risks, decontamination using bleach must be modified, such as by the use of chemical methods for neutralizing residuals. 12 The RS is a function of the decontamination method and the biological characteristics of the pathogen. Filter quality combines penetration and pressure drop and is affected by the physical characteristics of the FFR. However, this study focused on RS because it is a useful metric for quantifying sterilization or the degree of disinfection. In summary, bleach, UVC, the autoclave, and the TERC provide effective sterilization, whereas ethanol and UVA are ineffective and are not cleared as high-level disinfectants by the US FDA. 19 A better candidate for FFR reuse has higher filter quality and lower RS.
|
Since the first confirmed case, a 35-year-old woman of Chinese nationality who arrived from Wuhan city, China, on January 20, the number of new coronavirus infections in Korea (the disease hereinafter referred to as COVID-19) grew at an alarming rate. The increase in confirmed cases, which had once entered a calm phase, was reignited by the occurrence of the 31st confirmed patient on 18 February 2020. On February 26, about a week later, the number of confirmed patients surpassed 1000, and two days later, on February 28, the number had doubled.
The World Health Organization (WHO), which had responded passively as COVID-19 spread worldwide from China to neighboring Asian countries and beyond, finally declared COVID-19 a "pandemic". "Pandemic" describes the sixth and highest stage of the WHO's pandemic alert scale and refers to the global spread of an epidemic.
COVID-19 has infected millions of people around the world. Countries try to take effective measures to combat the pandemic: preventing the spread, decreasing the risk of infection, restoring the economy, protecting society, preserving the environment, and making up for the losses caused during the global epidemic [1, 2]. In such a pandemic, many nations have the capacity to cope with the tangible and intangible risks resulting from the virus and to take appropriate prevention measures [2]. Since medical systems differ from country to country and government policies are not the same, tracking changes in key media keywords offers a meaningful way to study the Korean government's response policy. The study period is limited to 20 January 2020 through 30 April 2020.
Meanwhile, the rapid spread of COVID-19 and worldwide interest in it are leading to a surge in media reports related to COVID-19. Because the public mainly acquires limited information from the media and interprets it through personal feelings and experiences, understanding the public's perception of COVID-19 requires examining domestic media reports about it. Many citizens also seek information from various media to assess the situation and to protect their health; information-seeking behavior can decrease the anxiety triggered by uncertainty during a disaster [3, 4]. However, this paper focuses on keyword changes in newspapers, magazines, and broadcasts, excluding the Internet.
This study aims to find out how COVID-19-related news was handled in the domestic media. We used BIGKinds, which provides big data analysis technology and an integrated database of news collected from various media companies such as newspapers and broadcasters. In particular, this paper focuses on the number of news articles by period and by incident type, and on the analysis of related words based on big data. Through this, we track public interest in COVID-19 and the government's policy changes and seek ways to minimize the pandemic.
The official name of the disease is Coronavirus Infection-19 (COVID-19), a viral respiratory disease that emerged in Wuhan, Hubei Province, China, in December 2019. SARS (Severe Acute Respiratory Syndrome), first known in 2003 as a respiratory epidemic of unknown cause, was later determined to be caused by a coronavirus, as was MERS (Middle East Respiratory Syndrome), which was prevalent in 2012. The disease often attacks the respiratory system and develops into pneumonia after symptoms such as high fever, sore throat, cough, and difficulty breathing. The incubation period is about 3 to 7 days but can last up to 14 days or more, and transmissibility is known to be highest early on, while symptoms are still mild. COVID-19 is caused by a virus created by mutation of an existing virus while parasitizing another host; it cannot be called an entirely new virus, but it causes fear because there is no treatment or vaccine.
COVID-19 is sparking a serious and unfamiliar international public health emergency [5], since it has shown a wide range of symptoms, from a mild flu-like prodrome such as cold, sore throat, cough, and fever, to more severe features such as pneumonia and breathing difficulties and, in some cases, death [6]. This shows that the world will face many difficulties in coping with COVID-19, and vaccine development is not expected to be easy.
The occurrence of COVID-19 can be categorized as a typical global pandemic crisis, defined as a specific and surprising event leading to high levels of uncertainty and serious threats [7]. According to Worldometer, a real-time international statistics website, as of 25 April 2020 at 10:56 a.m., the worldwide cumulative number of COVID-19 diagnoses was 2,830,051 (+111,352), the number of deaths was 197,245 (+6,591), and the number recovered was 798,772. As of the same date, Korea (ranked 31st) had 10,718 confirmed patients (+10) and 240 deaths (+0). Table 1 shows the spread of COVID-19 up to April 30, after the first confirmed case in Korea on January 20, 2020. Figure 1 shows the daily new and cumulative confirmed cases in Korea from January 1 to April 30, 2020. On February 29, 2020, the number of new confirmed cases reached 909; from April 6, 2020, it fell to 50 or fewer, and from April 18, 2020, to fewer than 20. These changes in the number of confirmed cases frame the Korean government's COVID-19 response policy. Thus, this study examines how the trend of media coverage changed with the spread of COVID-19 and the related issues, and discusses the implications.
Table 1. Timeline of the spread of COVID-19 in Korea, 20 January to 30 April 2020 (entries without dates are undated in the source):

20 January 2020: The first confirmed case of COVID-19 in Korea, a 36-year-old Chinese woman who arrived in Incheon on the 19th from Wuhan, China. Infectious disease crisis alert level raised from "Attention" to "Caution".
24 January 2020: The second confirmed case, a 56-year-old Korean man who arrived at Gimpo Airport from Wuhan.
27 January 2020: Infectious disease crisis alert level raised from "Caution" to "Warning"; the new coronavirus central accident control headquarters begins operation.
28 January 2020: Start of a full survey of arrivals from Wuhan between January 13 and 26.
31 January 2020: The first Korean residents returning from Wuhan begin isolated life at temporary living facilities in Jincheon and Asan.
Restrictions on the entry of foreigners from China's Hubei province; application of "special entry procedures" for arrivals from China; suspension of international visa-free entry to Jeju Island.
Second patient discharged, the first discharge of a domestic patient, 13 days after confirmed diagnosis.
The first life therapy center for mild patients opens in Daegu.
3 March 2020: The cumulative number of confirmed cases in Korea reaches 5186, passing the 5000 mark.
7 March 2020: Implementation of a five-part mask system that determines the date of purchase according to the year of birth.
8 March 2020: The first confirmed case at the Seoul Guro call center (a 56-year-old woman), the largest cluster infection in the metropolitan area.
11 March 2020: The World Health Organization (WHO) declares a global pandemic for COVID-19.
Declaration of special disaster zones in Daegu and parts of North Gyeongsang Province; "special entry procedures" applied to arrivals from France, Germany, Spain, Britain, and the Netherlands, expanded to all arrivals from Europe on the 16th.
22 March 2020: Start of "high-intensity social distancing", including restrictions on the operation of religious, entertainment, and indoor sports facilities; mandatory testing of arrivals from Europe and mandatory two-week self-isolation.
27 March 2020: Mandatory two-week self-isolation for arrivals from the United States.
1 April 2020: Two-week self-isolation required for all arrivals.
3 April 2020: The cumulative number of confirmed cases in Korea reaches 10,062, passing the 10,000 mark.
According to the theory of media dependence [8], during serious social turmoil the demand for related information and situational awareness is unusually high, and the media are generally recognized as best meeting these needs [9]. Specifically, the public has depended heavily on the media to get information on the reactions of organizations [10, 11] and to exchange opinions with others [12].
In the last two decades, new media based on advanced Internet communication technology, such as social media and YouTube, have emerged and gradually replaced traditional media thanks to advantages in convenience, time, and cost [13]. People have mainly employed social media to communicate with the public and obtain crowd-sourced information [14]. Therefore, the new media can be effective in informing people of the severity of COVID-19.
In modern society, data are rapidly increasing due to the development of numerous information technologies and the use of social media [15], and the concept of big data emerged to create new meaning and value from the vast amount of existing data [16]. Big data can be defined as information technology that generates valuable information from large amounts of data practically and efficiently in order to predict the future; it differs from existing data in terms of size, speed, and type [17]. It also refers to data sets so large that they exceed the range a typical database can store, manage, and analyze.
Newspaper data have been extensively studied in the fields of media and information science. However, research has mainly relied on close reading and manual content analysis rather than mechanical text mining. Recently, with the development of data mining tools, many papers analyzing newspaper articles and public opinion data have begun to be published [18]. Press big data refers to the result of extracting structured metadata, such as institutions, numbers, people, and quotations, from unstructured data through image processing, natural language processing, and semantic network analysis. It has not only the unstructured nature of the text itself but also the structured characteristics of medium, genre, and date, and is therefore relatively easy to handle and manage compared with completely unstructured data [19]. Accordingly, many studies have used press big data in various academic fields. Through big data analysis, we can examine various present and past trends in society; such data can be researched in depth while tracking an issue and can reveal the hidden context of information through relationship network analysis [20].
The BIGKinds service is a news report analysis service that evolved from the KINDS (Korea Integrated News Database System) news search service. KINDS, which began in 1990, tracks various news sources such as broadcasts and major daily newspapers and provides search services. Building on the vast news data accumulated by KINDS and on recently spotlighted big data analysis technology, the new news information provision and search system, BIGKinds, was established on April 19, 2016 and has grown since [21]. The BIGKinds service consists of news collection, analysis, and storage systems; as of May 2020, anyone can search and use 10 million articles published by fifty-four media outlets (eleven national newspapers, eight economic newspapers, twenty-eight local newspapers, five broadcast stations, and two specialty magazines).
In this study, media reports were classified and analyzed according to keywords through BIGKinds, a news big data analysis system from the Korea Press Foundation. For the period from 20 January 2020 to 30 April 2020, the search keyword "corona" was used to cover new coronavirus infection, coronavirus infection, new coronavirus, coronavirus, and COVID-19; the purpose of this study was to examine the changes in and implications of issues related to these keywords.
The scope of the study was based on the current trend of COVID-19 and the government's response process. The period after COVID-19 was introduced into Korea (20 January 2020 to 17 February 2020) is called the "initial response stage". The period after the appearance of the 31st patient, the first case of infection in the Shincheonji Church in Daegu (from 18 February 2020), during which the government raised the alert to the highest level of infectious disease crisis warning, is classified as the "active response stage".
We analyzed twenty-two media companies, including three broadcasting stations (KBS, MBC, and SBS), a news-specialized channel (YTN), ten national newspapers (Kyunghyang Newspaper, Kookmin Daily, Dong-A Daily, Culture Daily, Seoul Newspaper, World Daily, Chosun Daily, JoongAng Daily, Hankyoreh, and Hankook Daily), and eight economic newspapers (Maeil Economy, Money Today, Seoul Economy, Asian Economy, Ajou Economy, Financial News, Korea Economy, and Herald Economy). The articles were classified into political, economic, social, cultural, regional, and IT/science categories. The incident classification also included all media reports related to crime, accidents, disasters, and society.
The importance of big data is increasingly emphasized in various research fields, and it is used as a method to efficiently solve complex and diverse problems [22]. The BIGKinds program provides a function to calculate the frequency of words appearing in related text and to convert them into a visualized image according to their weight and frequency. In this study, keyword trends and related words were analyzed. Trends were examined by counting the number of articles per day in the news retrieved by keyword and displaying the counts as graphs. The retrieved data were then analyzed for associations using the topic rank algorithm and displayed in word cloud form [23].
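A minimal version of this trend analysis (with invented dates and headlines standing in for the BIGKinds trend function) is just a daily tally of matching articles:

```python
# Tally "corona"-matching articles per day; hypothetical (date, headline) rows.
from collections import Counter

rows = [
    ("2020-02-03", "corona cases rise as entry limits begin"),
    ("2020-02-03", "markets react to corona fears"),
    ("2020-02-04", "special entry procedure starts amid corona outbreak"),
]

daily_counts = Counter(date for date, headline in rows if "corona" in headline)
for date in sorted(daily_counts):
    print(date, daily_counts[date])
```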
The word cloud is a technique that extracts keywords from a document and visualizes them so that the nature of the document can be checked intuitively. After a researcher extracts keywords from the news set and analyzes the correlations between them using the topic rank algorithm, the keywords are visualized and displayed by their degree of relevance in a word cloud. Topic rank is an analysis method that extracts related keywords by calculating the occurrence frequency and importance of words appearing together with a specific keyword, and it is commonly used to elicit key concepts [24]. Therefore, in this study, the media coverage of "corona" was visualized and the related topics were investigated to analyze the pattern of media reporting, focusing on issues derived from the keyword weights.
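As a simplified stand-in for this pipeline (the article texts below are invented, and a raw frequency count is only a crude proxy for BIGKinds' TopicRank weighting), ranking the words that co-occur with the search keyword might look like the following.

```python
# Count words co-occurring with "corona" across articles; a word cloud
# would scale font size by these counts (or by TopicRank-style weights).
from collections import Counter

articles = [  # hypothetical article texts
    "corona outbreak in wuhan china raises alarm",
    "government response to corona and fake news on sns",
    "corona confirmed case surge in daegu and shincheonji",
]

STOPWORDS = {"in", "on", "to", "and", "the", "of", "corona"}
co_occurrence = Counter()
for text in articles:
    co_occurrence.update(set(text.split()) - STOPWORDS)

for word, count in co_occurrence.most_common(5):
    print(word, count)
```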
The subject of this study was to classify and analyze press reports according to keywords. From 20 January 2020 to 30 April 2020, the search keyword returned a total of 7719 news articles about COVID-19, of which 7391 were finally used for the analysis after filtering. The results are shown in Figure 2.
When the first patient in Korea, the Chinese woman who had entered the country, was confirmed on January 20, the government raised the infectious disease crisis alert level from "Attention" to "Caution". The government then raised the alert level from "Caution" to "Warning" on January 27 and began operating a response system under the Korea Centers for Disease Control and Prevention (KCDC) to handle new coronavirus infections. On February 3 (143 articles) and February 4 (142 articles), "corona"-related coverage trended upward as the government restricted the entry of foreigners from Hubei Province, China, implemented the "special entry procedure" for arrivals, and temporarily suspended visa-free entry to Jeju Island.
In all, 1638 articles were extracted for the initial response phase (20 January 2020 to 17 February 2020), the period of response after COVID-19 first spread to Korea, with a further 96 cases excluded from analysis. The results are shown in Figure 3, and Table 2 shows the related keyword rankings and the word cloud for the initial response phase (20 January 2020-17 February 2020).
The number of COVID-19-related articles exploded with the outbreak of the first domestic confirmed case. China, China Wuhan, Wuhan, and China Hubei, keywords for the places where COVID-19 originated, were located at the top. As uncertain information about COVID-19 spread alongside articles on the government response (KCDC and President Moon Jae-in), keywords used to identify accurate sources, such as broadcast contents, fake news, and SNS, also ranked in the top 20. Fake news that exploited the public's anxiety to attract attention was shared on the Internet, causing confusion.
Moreover, interest in SARS and MERS, respiratory infections that had been prevalent in the past, was also high. The number of COVID-19 diagnoses and deaths surpassed that of SARS (severe acute respiratory syndrome) in 2003, when more than 5327 people were confirmed with SARS in mainland China, of whom 349 died. Reflecting this, SARS ranked near the top of the major new coronavirus keywords. After the first cases of infection in the Shincheonji Church, the number of confirmed cases increased rapidly, mainly in Daegu and Gyeongbuk, and the government raised the infectious disease crisis alert to the highest level.
The frequency of articles peaked at 130 on February 24, driven by the cluster outbreak in Shincheonji, and 127 corona-related articles appeared on March 11, the day of the World Health Organization's (WHO) pandemic declaration. Hundreds of confirmed cases occurred every day; on March 3, the cumulative number of diagnoses in Korea reached 5000, and on April 3, 10,000. On March 22, high-intensity social distancing began, and on April 4 it was extended by two weeks. On April 19, the intensity of social distancing was eased and the policy extended until May 5, and no COVID-19 death was reported on April 24. Whenever the number of confirmed cases and the government's countermeasures were announced, more than 100 corona-related articles were issued, and related words increased rapidly from the initial response stage.
In all, 5753 articles were extracted for the active response phase (18 February 2020-30 April 2020), with a further 232 cases excluded from analysis; this phase began with the emergence of the 31st patient, the first case of infection in the Shincheonji Church in Daegu, which became the watershed in Korea. The results are shown in Figure 4, and Table 3 shows the related keyword rankings and the word cloud for the active response phase (18 February 2020-30 April 2020).
In the active response phase, keywords related to the surging number of domestic confirmed cases (Confirmed Case and People) rose. Furthermore, instead of keywords related to origins in China, Daegu and Shincheonji, which drove the spread in Korea, ranked third and sixth, respectively. On March 11, when signs of spread to Europe and of a pandemic appeared, the World Health Organization (WHO) declared a pandemic, and the related keywords (Europe and Pandemic) ranked near the top. As secondary infection in the community began, keywords reflecting nationwide and regional spread (Our Country, Korea, People, and Local Government) also appeared.
President Moon declared South Korea to be "the number one country that had coped well with the pandemic." Thanks to domestic and foreign media praise of the "South Korean model," the ruling Korea Democratic Party (and its satellite party) captured an unprecedented, near two-thirds majority of the National Assembly (180 of 300 seats) on 15 April 2020, and Moon's approval rating rose to as high as 70% [25]. This shows that Koreans largely accepted and followed the government's COVID-19 response policy. Since March 22, social distancing had been reinforced, and related keywords increased. From March to the beginning of April the spreading trend continued, but from April 5, after social distancing had been in place for over a month, the spreading rate slowed significantly. Companies adopted flexible work and video conferencing, and citizens actively participated in social distancing by canceling meetings, ordering food deliveries rather than eating out, and shifting consumption to online shopping. Thus, reducing personal contact is identified as a central measure against the spread of the novel coronavirus disease.
In this study, keyword trends were analyzed based on media reports related to COVID-19 using big data analysis. The implications obtained from the results are as follows. First, the keywords confirmed that public and societal interest shifted with the COVID-19 situation. In the early stages of the outbreak, interest in origins such as China and Wuhan was high, but as the spread of COVID-19 became full-scale, interest in Shincheonji, Daegu, and domestic clusters increased.
Second, the big data analysis showed that COVID-19 affects the daily lives of individuals and elicits high interest in the government's response process. This interest was confirmed by the fact that "President Moon Jae-in" emerged as a key keyword across the entire response period. That COVID-19 has consistently occupied a high frequency since its appearance in Korea indicates that society is watching the government's response to the disease with great interest. Third, from April 5 to April 30, the government's high-intensity social distancing policy was identified as suppressing the increase in the number of COVID-19 confirmed cases in Korea. As the United States and Europe loosened social distancing, their numbers of confirmed cases exploded. Thus, high-intensity social distancing can be one way to help minimize COVID-19 events.
This study is a basic study of the volume of domestic COVID-19 news coverage and its related keywords. It is significant in that the results confirmed the level of interest in COVID-19 and the weight of related keywords, even at a basic level. However, COVID-19 has not yet completely ended, and the study has the limitation that it is the result of a partial search using the keyword "corona". Future studies should expand the set of COVID-19-related keywords and terms and develop the research method, for example by identifying issues centered on the main words. It is also necessary to develop big data analyses that confirm whether citizen awareness and government policies in response to COVID-19 are effective.
|
Scientists and public health officials have observed and mapped the geographical incidence of infectious diseases in relation to weather and climate for hundreds of years, and formally for at least a half century (Reisen, 2010). In recent years, this work has accelerated significantly in the context of predictions for global climate change (IPCC, 2007b). The development of, and response to, the IPCC AR4 predictions of probable increases in vector-borne and diarrheal diseases during the coming decades (IPCC, 2007a) has catalyzed a great deal of research, analysis, and speculation regarding the nature, magnitude, and extent of possible changes.
Numerous, and in some cases conflicting, predictions have been developed regarding the frequency, severity, and duration of epidemics that may emerge. With respect to the biogeographical focus of this issue, the central question is whether pathogens and parasites that are currently restricted to the tropics and lower latitudes, where the world's greatest biodiversity lies, will move toward the poles (mostly north) and upward in altitude. Perhaps the more controversial topic today is the corollary to this question: how much will the future ranges of diseases that do move be constrained by socioeconomic conditions, including our capacity to control them (Hay et al., 2002; Patz et al., 2005; Lafferty, 2009)?
Until very recently, climate projections for the coming decades have been limited to very coarse scale projections using Global Circulation Models (GCMs) at the level of continents and gross latitudinal and altitudinal changes (Fowler et al., 2007) . Downscaling such climate models is still in its infancy and only beginning to be useful for ecological and public health research at the regional, national, and sub-national scale for which most species distribution studies are conducted and validated (Beaumont et al., 2008) . As such, epidemiologists, ecologists, geographers, and others are grappling with the implications of global climate projections for infectious diseases using a variety of approaches. These include biological process models, statistical geospatial analysis of current and historical prevalence and incidence, and macroeconomic demographic models, all coupled with climate analyses and projections.
Although we are still at an early stage in our ability to make predictions for these extraordinarily complex phenomena, we are beginning to see some general patterns with regard to the important geophysical factors that govern the biological basis for distribution change, the role of transport of diseases, vectors, and hosts, the biotic assemblages that influence establishment, and the socioeconomic conditions that constrain or enhance these dynamics.
Underlying most predictions for climate change effects on parasite and pathogen distribution are the physiological factors that regulate survivorship, reproduction, and transmission, and their interaction with the extrinsic environmental changes associated with climate: principally precipitation, humidity, and air and water temperature. Under moderate greenhouse gas emission scenarios, GCM models project approximately 2-4°C of average warming during the next century, with greater precipitation at higher latitudes, decreased precipitation at lower latitudes, and increases in heavy precipitation events in many regions (IPCC, 2007b).
Biological process models are an important component of the discussion. Biological models estimate the responsiveness and thresholds of parasites, pathogens, and vectors to temperature and precipitation. These models can be used to estimate survivorship, transmissibility, and reproduction, including estimates of R0, the basic reproductive ratio, under different climate scenarios (see Lafferty, 2009 for review). Based largely on studies of vector and/or parasite development, warming and increases in humidity are predicted to open up new zones for malaria in Africa (Epstein et al., 1998; Martens, 1999), parasitic nematodes in the Arctic (Kutz et al., 2005), West Nile Virus (Reisen et al., 2006), Lyme disease in North America (Ogden et al., 2008), and Schistosomiasis in China (Zhou et al., 2008). Historical analyses of climate patterns coupled with these biological process models provide additional explanatory power. For example, observed altitudinal increases in falciparum malaria in the East African highlands during the past 30 years have been associated with increasing temperatures and are consistent with models of anopheline mosquito vector development.
Process models also suggest, as one would expect, that in some cases the same climate trends may reduce survivorship or transmission of pathogens, parasites, and their vectors at the warmer boundaries of their current ranges (Lafferty, 2009), because future temperatures may exceed the biological thresholds for survival or transmission at those boundaries. More analysis of the biological maxima and minima of pathogens, parasites, and their vectors, including their diurnal fluctuation thresholds, in concert with downscaled climate models or historical records, will be helpful in this regard. Paaijmans et al. (2009) analyzed the development time of Plasmodium falciparum parasites in relation to that of their vector under a range of diurnal temperature fluctuations. They found that at the higher mean temperatures (>26°C) associated with endemic malaria transmission in Africa, the effects of global warming on malaria dynamics may be overestimated. However, at the lower mean temperatures associated with epidemic outbreaks (<20°C), such as those found in the Kenyan Highlands, real increases in R0 with a couple of degrees of warming are biologically possible and consistent with epidemiological trends during the past 20 years in that region.
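To illustrate how such process models couple temperature to transmission potential, the sketch below combines the classical Ross-Macdonald R0 expression with a degree-day model of the P. falciparum extrinsic incubation period, n(T) = 111/(T − 16) days; all parameter values are illustrative assumptions, not estimates from the studies cited above.

```python
# Ross-Macdonald R0 = m a^2 b c p^n / (r * -ln(p)) with a
# temperature-dependent extrinsic incubation period n(T).
import math

def r0(temp_c, m=10.0, a=0.3, b=0.5, c=0.5, p=0.9, r=0.01):
    # m: mosquitoes per human, a: bites per mosquito per day, b and c:
    # transmission probabilities, p: daily mosquito survival, r: human
    # recovery rate. All values are illustrative assumptions.
    n = 111.0 / (temp_c - 16.0)  # EIP in days; valid only for T > 16 C
    return (m * a**2 * b * c * p**n) / (r * -math.log(p))

for t in (18, 20, 22, 26):
    print(f"T = {t} C -> R0 ~ {r0(t):.1f}")
# R0 rises steeply with small warming near the 18-20 C epidemic margin.
```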
Similarly, changes in seasonality, especially the intensity of rainfall, have been shown to be important in several diseases (Koelle et al., 2005; Altizer et al., 2006; Pascual et al., 2008), particularly cholera dynamics. More frequent heavy rainfall events are predicted to occur in some areas, although the confidence levels and scaling of these future precipitation models may still be insufficient to support geographically specific predictions.
Whereas vector-borne diseases, and to a lesser extent diarrheal diseases, have been the principal focus of discussion, directly transmitted diseases, such as influenza and cold viruses, are likely to change as well. Recent laboratory work (Shaman and Kohn, 2009) on the persistence and transmission efficiency of influenza has shown the virus to be markedly constrained by absolute humidity. More recent epidemiological analyses support this finding (Shaman et al., 2010) and suggest that the marked seasonality and generally predictable geographical trajectory of seasonal influenza likely reflect, at least in part, climate variables.
Analyses of biological processes in relation to climate-relevant geophysical parameters establish the potential for climate change effects. However, they are insufficient to explain the highly complex dynamics of infectious disease transmission and establishment, which involve biological, climatological, and socioeconomic considerations.
A frequently used alternative approach to predicting biogeographic spread is based on mapping the statistics of current and historical disease or vector prevalence or transmission coupled with climate projections, often based on GCMs (Rogers and Randolph, 2000; Mabaso et al., 2006; Ebi et al., 2007) . More recent statistical mapping (also called niche modeling) approaches have been able to use downscaled climate models (de la Rocque, 2008) in their projections. Several have predicted northward shifting zones of infection with modest impacts on overall extent of distribution (Rogers and Randolph, 2000; Hay et al., 2004; Nakazawa et al., 2007) . Although essential, many of these statistical mapping projects make their own set of assumptions. Particularly problematic are those that assume that today's public health and socioeconomic conditions, including surveillance and control capabilities, populations affected, and other ecological and economic conditions, will be similar to future conditions at new locations.
One significant and long-presumed constraint on long-range biogeographical spread of invasive species, including pathogens, parasites, vectors, and hosts, has been the role of spatial distance in the ability to arrive at a new hospitable site (MacArthur and Wilson, 1967; Reperant, 2009). This is, of course, less of a constraint today and for the foreseeable future owing to a diversity of globalized transport opportunities. Specifically, agricultural trade (Arzt et al., 2010), livestock movement (Fevre et al., 2006), wildlife trade (Karesh et al., 2005), human travel and migration (MacPherson et al., 2009), ship ballast water (Aguirre-Macedo et al., 2008), the auto tire trade (Hawley et al., 1987), and the global transport of desert dust containing microbiota regularly move pathogens, vectors, and infected hosts all around the Earth.
At small and intermediate scales, distance is still a very important determinant of disease movement. For example, triatomine vector dispersal of Chagas disease in Argentina, at a scale of meters to a few kilometers (Zu Dohna et al., 2009), and movement of African trypanosomiasis via animal migrations, at a scale of tens to hundreds of kilometers (Batchelor et al., 2009), are both clearly constrained by distance. Of course, if pathogens establish in a new zone and the process is allowed to continue incrementally year after year, diseases can and have moved across countries in a decade without the aid of "modern transportation." However, at continental and intercontinental scales, movement of many diseases in a warming world is unlikely to have a linear relationship to distance because of the numerous means of long-distance transport operating today.
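One way to picture this scale dependence is a mixture dispersal kernel: almost all movements are short-range and distance-limited, but a small probability of human-mediated long-distance transport dominates spread at continental scales. All rates and distances in the sketch below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def dispersal_km(n, local_scale=2.0, p_long=0.01, long_max=8000.0):
    """Mixture kernel: mostly local vector movement, rare long-haul transport."""
    local = rng.exponential(local_scale, n)     # meters-to-kilometers movement
    long_haul = rng.uniform(0.0, long_max, n)   # e.g., trade or air travel
    is_long = rng.random(n) < p_long
    return np.where(is_long, long_haul, local)

d = dispersal_km(100_000)
print(f"median jump   : {np.median(d):8.1f} km")           # set by local movement
print(f"99.9th pctile : {np.percentile(d, 99.9):8.1f} km")  # set by rare long jumps
```

The median is governed entirely by the local kernel, while the tail, and hence the continental-scale pattern, is governed by the rare long-distance component; this is why spread stops scaling linearly with distance.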
Because infectious diseases are components of species networks, simply arriving in a new location may be insufficient for establishment and the creation of endemic lifecycles. A number of studies have shown that emerging and reemerging diseases, particularly those in the tropics, are predominantly zoonotic and/or vector-borne (Daszak et al., 2000; Woolhouse and Gowtage-Sequeria, 2005; Wolfe et al., 2007). The principal ecological determinant of successful establishment of zoonotic and vector-borne diseases within a decadal time frame is probably a broad host range (Woolhouse and Gowtage-Sequeria, 2005; Daszak et al., 2000). In recent years, a number of vector-borne viral diseases have migrated across or between continents through their ability to acquire new vectors (West Nile, Chikungunya) or to utilize diverse and widely dispersed domestic (Rift Valley) or native (West Nile) animal hosts (Gould and Higgs, 2009). Importantly, the capacity to acquire new vector and host species is often facilitated by the pathogen's ability to evolve mutant strains that are efficiently acquired and transmitted by the new species.
Parasitic and infectious diseases with long, specialized, and/or especially complex life cycles involving multiple host species would logically be more challenged to establish themselves in new habitats by the need to find appropriate hosts and vectors. For example, the highly pathogenic alveolar echinococcosis (fox tapeworm) has a northern distribution that is influenced by climate but constrained by landscape variables that determine host availability (Danson et al., 2003; Mas-Coma et al., 2008). Egg survival depends on ground moisture levels, but intermediate stages survive in rodents, and the definitive hosts are typically foxes, coyotes, and dogs. At large spatial scales across China and Inner Mongolia, the distribution of alveolar echinococcosis has become focal due to the interaction of the requirements for a physical environment moist and cool enough to allow egg survival and for grassland landscapes that support high densities of both small mammals and canine hosts (Giraudoux et al., 2006). High levels of forest cover or cleared agricultural landscapes reduce intermediate and/or definitive host densities.
It follows that for long-lived, host-specialized parasites, such as Echinococcus, long time periods are likely needed for the evolutionary changes that would enable dispersal to zones with insufficient densities of their original host species. However, some historical and paleontological studies suggest that host-switching in response to ecological change has been more common than generally thought, and that global climate change is likely to broaden parasite host ranges even for long-lived, nominally specialized parasites (Brooks and Hoberg, 2007).
By contrast, water-borne diarrheal diseases are often predicted to increase with climate change. The biogeographically restrictive interaction of diarrhea-causing pathogens with new hosts or other pathogens has received little attention to date in the context of climate change. An exception is a series of papers by Colwell and co-workers on associations of Vibrio cholerae with copepods in zooplankton (Gil et al., 2004; Constantin de Magny et al., 2008).
For directly transmitted human diseases, these sorts of species-network challenges may be even less of a constraint. Humans are nearly everywhere, highly mobile, and readily share most diseases. The principal biological barriers are host immune systems. Largely for these reasons, we see rapid cross-national and continental movement and invasion by new viruses, such as seasonal influenza, SARS, and other coronaviruses. Although much of the climate change and disease discussion has focused on vector-borne diseases (Lafferty, 2009), the "biotic simplicity" of directly transmitted diseases, in concert with the climate sensitivity of viruses discussed earlier, might argue that we should pay more attention to them in climate change discussions.
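In the spirit of the absolute-humidity-forced influenza models cited above, though with invented rather than fitted parameters, a minimal SIRS sketch shows how a seasonal humidity cycle alone can generate recurrent winter epidemics in a directly transmitted disease.

```python
import math

def simulate(days=3 * 365):
    """Euler-stepped SIRS with humidity-modulated transmission (illustrative)."""
    S, I, R = 1.0 - 1e-4, 1e-4, 0.0
    gamma, xi = 1.0 / 4.0, 1.0 / 365.0   # recovery and waning immunity (per day)
    best_day, best_i = 0, 0.0
    for day in range(days):
        ah = 9.0 + 5.0 * math.sin(2.0 * math.pi * day / 365.0)  # seasonal AH, g/m^3
        r0 = 1.2 + 1.3 * math.exp(-0.2 * ah)                    # low AH raises R0
        beta = r0 * gamma
        dS = -beta * S * I + xi * R
        dI = beta * S * I - gamma * I
        S, I = S + dS, I + dI
        R = 1.0 - S - I                                          # closed population
        if I > best_i:
            best_day, best_i = day, I
    print(f"largest outbreak peaks on day {best_day} (prevalence {best_i:.3f})")

simulate()
```

Transmission here depends only on the humidity cycle, yet the epidemic peaks fall in the low-humidity season, echoing the observed seasonality of influenza.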
Although there has been little serious argument that climate change will have no impact on infectious disease distributions, a view expounded by a number of authors (Reiter, 2008; Zell et al., 2008; Lafferty, 2009) discounts the relative role of climate change compared with other major influences on human disease. These factors include the efficacy of control measures, such as vector control, sanitation, vaccination, and chemotherapy (Hay et al., 2002), as well as land use, urbanization, travel, wealth, and social behavior (Gubler et al., 2001; Sumilo et al., 2008).
Supporting this view, statistical mapping analyses (Rogers and Randolph, 2000; Hay et al., 2004) indicate that the spatial extent of endemic malaria, and the number of countries reporting it, shrank toward the tropics during the twentieth century from a much wider range. The authors attribute this principally to better control of vectors and disease. Further support can be seen in the rapid reemergence of tick-borne encephalitis in central and eastern European states after the collapse of the Soviet Union (Sumilo et al., 2008). Statistical comparisons of extensive socioeconomic surveys and public health data across five former Soviet states indicate that socioeconomic, land use, and public health variables can explain most of the cross-country variation in the spread of tick-borne encephalitis during this period. Climate variables were not assessed in this analysis, but significant unemployment, reversion of agricultural lands, reduced pesticide use, reduced chemotherapy, and increased individual exposure through field collection of wild foods appear to have contributed to the rapid increases of the early 1990s.
The interaction of climate change, infectious diseases, socioeconomic conditions, and behavior is going to be increasingly important in the coming years and represents an exciting and complex area for research (Patz et al., 2005). For example, several authors have predicted mass migrations in response to climate change, particularly to worsening droughts in sub-Saharan Africa (Ramin and McMichael, 2009; IPCC, 2007a). If these predictions are borne out, many diseases will likely travel with migrants and, in some cases, establish new infectious species or strains with significant effects on receiver populations (MacPherson et al., 2009).
Similarly, the availability of clean water and hygienic behavior are principal determinants of diarrheal disease outbreaks, and clean water will be increasingly in short supply in much of Africa and parts of Southeast Asia in the coming decades due to overexploitation and climate change (IPCC, 2007a). In an analysis of the frequency of all-cause diarrhea in children younger than 5 years in low- and middle-income countries, Lloyd et al. (2007) found that low mean monthly rainfall during the course of a year was significantly associated with disease. Surprisingly, neither temperature nor simple measures of socioeconomic development (GDP or GCP) were significant in this analysis of a disease type mostly managed by traditional public health measures, such as sanitation and education. Diarrheal diseases are produced by a diversity of bacterial, viral, and protozoan organisms. Strain diversity within a species is locally high around the world (e.g., Jiang et al., 2000), and migration to new zones has significant implications for biologically naïve host populations.
Finally, given the importance and complexity of socioeconomic conditions for disease, a variety of approaches and disciplines will be necessary to assess their effects on pathogen and parasite biogeography (Parkes et al., 2005). Macroeconomic and demographic modeling may offer underexploited utility in this regard. A recent paper by Tang et al. (2009) is instructive; they set out to disentangle the causality of a "climate-income trap model for human health" by examining national life expectancy statistics (45 African and 113 non-African countries) in a cross-sectional analysis. Using a series of regression models and variation partitioning analyses to separate the pure and combined effects of income and climate, their findings support the hypotheses that: (1) income can moderate the adverse effect of climate on life expectancy (LE); (2) as income grows, climate will have less of an effect on LE; and (3) climate has a larger effect on LE in African countries than in non-African countries, and, within African countries, climate has had a larger effect on LE than wealth.
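A toy version of the variation-partitioning step, on synthetic data, shows how pure and shared contributions are separated from the R² values of nested regressions. The data, coefficients, and resulting fractions below are invented and bear no relation to Tang et al.'s estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
income = rng.normal(size=n)
climate = 0.5 * income + rng.normal(size=n)          # correlated predictors -> shared variance
life_exp = 2.0 * income + 1.0 * climate + rng.normal(size=n)

def r2(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_inc = r2(income[:, None], life_exp)
r2_cli = r2(climate[:, None], life_exp)
r2_both = r2(np.column_stack([income, climate]), life_exp)

print(f"pure income : {r2_both - r2_cli:.3f}")   # income's effect net of climate
print(f"pure climate: {r2_both - r2_inc:.3f}")   # climate's effect net of income
print(f"shared      : {r2_inc + r2_cli - r2_both:.3f}")
```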
Debates over the impact of climate change on infectious disease emergence and migration are certain to continue in the years to come. Accumulating and validating biological process models, and "ground truthing" them with statistical and mapping approaches, will help provide greater clarity and ensure that we are asking the right questions.
Macroeconomic and demographic approaches may help us address some of the big questions related to the role of disease management capability. Critical to a greater understanding of these dynamics, and to more predictive power, is the wider availability of General Circulation Models downscaled to scales relevant to community and landscape biology and public health.
I have not addressed time scales in any depth here, but they are extremely important aspects of biogeographic changes in relation to climate. Longitudinal studies on the multi-decadal time frames that climate change presents are a major challenge. But the time required for evolution of host range, for population dynamics of hosts and parasites to stabilize, and for medical and public health systems to adapt will all have major roles in determining biogeographic patterns of disease, and will all operate relatively independently. Creative approaches are needed to address this problem.
Debates about whether socioeconomic conditions or climate will be more important are probably more polarized than is productive. Both are clearly important, and these complex suites of variables interact with each other, possibly in different ways for different diseases in different places. Understanding these disease-forcing factors and their interactions is crucial to our ability to predict and prevent the most severe health outcomes.
The public health and medical community has made enormous strides in control of infectious diseases during the past half-century. Widespread availability of antimicrobial drugs, vector-control systems, diagnostics, vaccines, and increasingly sophisticated predictive models represent a powerful set of tools to protect public health from emerging diseases. However, it would be unwise to discount the potential importance of climate change, land use impacts, and other ecological determinants of future disease.
Historically, ecological factors, including climate, have been extremely important in determining both the range and severity of diseases. These factors are now aggravated by the near disappearance of antimicrobial drug development in the industrial pharmaceutical sector, dramatic reductions in training and support for vector-control scientists and implementers, persistent scientific and financial challenges in developing new drugs and vaccines for even the most widespread diseases (e.g., HIV, malaria, TB), and medical workforce shortages in the poorest parts of the world. As our scientific capabilities to describe and model diseases and disease processes expand rapidly, it will be important to ensure that we understand the changing nature of our planet and continually adapt and invest in our systems to meet the challenges.
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.