Search results for: structural risk
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9772


292 Green Architecture from the Thawing Arctic: Reconstructing Traditions for Future Resilience

Authors: Nancy Mackin

Abstract:

Historically, architects from Aalto to Gaudi to Wright have looked to the architectural knowledge of long-resident peoples for forms and structural principles specifically adapted to the regional climate, geology, materials availability, and culture. In this research, structures traditionally built by Inuit peoples in a remote region of the Canadian high Arctic provide a folio of architectural ideas that are increasingly relevant during these times of escalating carbon emissions and climate change. ‘Green Architecture from the Thawing Arctic’ researches, draws, models, and reconstructs traditional buildings of Inuit (Eskimo) peoples in three remote, often inaccessible Arctic communities. Structures verified in pre-contact oral history and early written history are first recorded in architectural drawings, then modeled and, with the participation of Inuit young people, local scientists, and Elders, reconstructed as emergency shelters. Three full-sized building types are constructed: a driftwood and turf-clad A-frame (spring/summer); a stone/bone/turf house with inwardly spiraling walls and a fan-shaped floor plan (autumn); and a parabolic/catenary arch-shaped dome from willow, turf, and skins (autumn/winter). Each reconstruction is filmed and featured in a short video. 
Communities found that the reconstructed buildings and the method of involving young people and Elders in the reconstructions have on-going usefulness, as follows: 1) The reconstructions provide emergency shelters, particularly needed as climate change worsens storms, floods, and freeze-thaw cycles and scientists and food harvesters who must work out on the land become stranded more frequently; 2) People from the communities re-learned from their Elders how to use materials from close at hand to construct impromptu shelters; 3) Forms from tradition, such as windbreaks at entrances and using levels to trap warmth within winter buildings, can be adapted and used in modern community buildings and housing; and 4) The project initiates much-needed educational and employment opportunities in the applied sciences (engineering and architecture), construction, and climate change monitoring, all offered in a culturally-responsive way. Elders, architects, scientists, and young people added innovations to the traditions as they worked, thereby suggesting new sustainable, culturally-meaningful building forms and material combinations that can be used for modern buildings. Adding to the growing interest in bio-mimicry, participants looked at properties of Arctic and subarctic materials such as moss (insulation), shrub bark (waterproofing), and willow withes (parabolic and catenary arched forms). ‘Green Architecture from the Thawing Arctic’ demonstrates the effective, useful architectural oeuvre of a resilient northern people. The research parallels efforts elsewhere in the world to revitalize long-resident peoples’ architectural knowledge, in the interests of designing sustainable buildings that reflect culture, heritage, and identity.

Keywords: architectural culture and identity, climate change, forms from nature, Inuit architecture, locally sourced biodegradable materials, traditional architectural knowledge, traditional Inuit knowledge

Procedia PDF Downloads 506
291 Legume Grain as Alternative to Soya Bean Meal in Small Ruminant Diets

Authors: Abidi Sourour, Ben Salem Hichem, Zoghlemi Aziza, Mezni Mejid, Nasri Saida

Abstract:

In Tunisia, there is an urgent need to maintain food security by reversing soil degradation and improving crop and livestock productivity. Conservation Agriculture (CA) can be helpful in enhancing crop productivity and soil health. However, the demand for crop residues as animal feed is among the major constraints for the adoption of CA. Thus, the objective of this trial was to test the nutritional value of new forage mixture hays as alternatives to cereal residues. Two tri-specific cereal-legume mixtures were studied and compared to the classic vetch-oat one. They were implemented at farm level in four regions characterized by a sub-humid climate: V70-A15-T15 (vetch 70%, oat 15%, triticale 15%), installed in two sites (Zhir and Safsafa); V60-A7-T33 (vetch 60%, oat 7%, triticale 33%); and V70-A30 (vetch 70%, oat 30%). Results revealed significant variation between mixtures: V70-A15-T15 installed at Safsafa recorded the highest forage yield, 12 t DM/ha, compared with V60-A7-T33 and V70-A30 installed, respectively, in Ksar Cheikh and Fernana with 11.6 and 11.2 t DM/ha. The same mixture installed in Zhir gave 22% lower yields than the one installed in Safsafa; in fact, the month of March was dry in Zhir. Moreover, these DM yields are comparable to those observed by Yucel and Avci (2009). The CP contents of the samples studied varied significantly between the mixtures (P<0.0003). V70-A15-T15 installed in Safsafa and V70-A30 presented higher CP contents (14.4 and 13.7% DM, respectively) compared to the other mixtures. These contents are explained by the high proportion of vetch in one mixture and the low proportion of weeds in the other. In all cases, the hay produced from these mixtures was significantly richer in protein than that of oats in pure culture (Abdelraheem et al., 2019). The positive correlation between the CP content and the proportion of vetch explains this superior quality. The NDF and ADF contents were similar for all mixtures. 
These values were similar to those reported in the literature (Abidi and Benyoussef, 2019; Haj-Ayed et al., 2000). In general, the Land Equivalent Ratio (LER) was significantly greater than 1 for the vetch-oat-triticale mixture at Zhir and Safsafa and also for the vetch-oat mixture at Fernana, proving that they are more productive in intercropping than in pure culture. For the Ksar Cheikh site, the LER value of the vetch-oat-triticale mixture remained at around 1, indicating no advantage of the mixed culture over pure culture; this is likely because the heavy presence of weeds interfered with both partners of the mixture. The LER for the vetch-oat mixture reached its maximum on March 13 and decreased in April but remained above 1. This shows that the supporting role of oats persisted until an advanced growth stage, since the variety used is characterized by very thick stems, protecting it from the risk of lodging. These forage mixtures present a promising option of high nutritional quality that could reduce the use of concentrate and, therefore, the cost of feed. With such feed value, these mixtures allow good animal performance.
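The Land Equivalent Ratio discussed above is the sum, over the species in a mixture, of each species' intercrop yield divided by its pure-culture (sole-crop) yield; values above 1 indicate an intercropping advantage. A minimal sketch of the calculation, using illustrative yields that are assumptions rather than the trial's data:

```python
def land_equivalent_ratio(intercrop_yields, monocrop_yields):
    """LER = sum of partial ratios (intercrop yield / sole-crop yield).

    LER > 1 means the mixture out-produces pure stands grown
    on the same total land area.
    """
    return sum(yi / ym for yi, ym in zip(intercrop_yields, monocrop_yields))

# Hypothetical yields (t DM/ha) for a three-species vetch-oat-triticale
# mixture: each species' yield within the mixture vs. in pure culture.
ler = land_equivalent_ratio([5.0, 3.0, 4.0], [8.0, 6.0, 8.0])
print(round(ler, 3))  # 5/8 + 3/6 + 4/8 = 1.625
```

An LER of about 1, as at the Ksar Cheikh site, means the mixture offered no land-use advantage over pure stands.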

Keywords: soybean, lupine, vetch, lamb-ADG, meat

Procedia PDF Downloads 69
290 The Influence of Minority Stress on Depression among Thai Lesbian, Gay, Bisexual, and Transgender Adults

Authors: Priyoth Kittiteerasack, Alana Steffen, Alicia K. Matthews

Abstract:

Depression is a leading cause of disability and disease burden worldwide. Notably, lesbian, gay, bisexual, and transgender (LGBT) populations are at higher risk for depression compared to their heterosexual and cisgender counterparts. To date, little is known about the rates and predictors of depression among Thai LGBT populations. As such, the purpose of this study was to: 1) measure the prevalence of depression among a diverse sample of Thai LGBT adults and 2) determine the influence of minority stress variables (discrimination, victimization, internalized homophobia, and identity concealment), general stress (stress and loneliness), and coping strategies (problem-focused, avoidance, and seeking social support) on depression outcomes. This study was guided by the Minority Stress Model (MSM). The MSM posits that elevated rates of mental health problems among LGBT populations stem from increased exposure to social stigma due to their membership in a stigmatized minority group. Social stigma, including discrimination and violence, represents a unique source of stress for LGBT individuals and has a direct impact on mental health. This study was conducted as part of a larger descriptive study of mental health among Thai LGBT adults. Standardized measures consistent with the MSM were selected and translated into the Thai language by a panel of LGBT experts using the forward and backward translation technique. The psychometric properties of the translated instruments were tested and acceptable (Cronbach’s alpha > .8 and Content Validity Index = 1). Study participants were recruited using convenience and snowball sampling methods. Self-administered survey data were collected via an online survey and via in-person data collection conducted at a leading Thai LGBT organization. Descriptive statistics and multivariate analyses using multiple linear regression models were conducted to analyze study data. 
The mean age of participants (n = 411) was 29.5 years (SD = 7.4). Participants were primarily male (90.5%), homosexual (79.3%), and cisgender (76.6%). The mean depression score of study participants was 9.46 (SD = 8.43). Forty-three percent of LGBT participants reported clinically significant levels of depression as measured by the Beck Depression Inventory. In multivariate models, the combined influence of demographic, stress, coping, and minority stressors explained 47.2% of the variance in depression scores (F(16,367) = 20.48, p < .001). Minority stressors independently associated with depression included discrimination (β = .43, p < .01), victimization (β = 1.53, p < .05), and identity concealment (β = -.54, p < .05). In addition, stress (β = .81, p < .001), history of a chronic disease (β = 1.20, p < .05), and coping strategies (problem-focused coping β = -1.88, p < .01; seeking social support β = -1.12, p < .05; and avoidance coping β = 2.85, p < .001) predicted depression scores. The study outcomes emphasized that minority stressors uniquely contributed to depression levels among Thai LGBT participants over and above typical non-minority stressors. Study findings have important implications for nursing practice and the development of intervention research.
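The variance explained and overall F statistic reported above come from a standard multiple linear regression. A minimal numpy sketch of that computation on synthetic data; the predictors, coefficients, and noise are illustrative assumptions, chosen only to match the reported degrees of freedom (16 predictors, n = 384):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 384, 16                      # sample size and number of predictors
X = rng.normal(size=(n, k))
beta = rng.normal(size=k)
y = X @ beta + rng.normal(size=n)   # synthetic outcome (e.g. a depression score)

# Ordinary least squares with an intercept column
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ coef
r2 = 1 - resid.var() / y.var()      # proportion of variance explained

# Overall model F statistic with df = (k, n - k - 1)
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))
print(round(r2, 3), round(f_stat, 1))
```

The same formula links the paper's reported R² = .472 to its F(16, 367) = 20.48.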

Keywords: depression, LGBT, minority stress, sexual and gender minority, Thailand

Procedia PDF Downloads 111
289 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is one of the puncturing methods that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but the evidence of effectiveness is still inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN with and without ultrasound guidance for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched manually through CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020) in November 2020. Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts included were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk of bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, and eight of them were high-quality papers according to the PEDro score. There were variations in the techniques of DN, including the direction and depth of insertion, number of needles, duration of stay, needle manipulation, and the number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -2.511 to -0.588) and high heterogeneity (P=0.002, I²=96.3%). 
In subgroup analysis, DN demonstrated significant effects on pain reduction in PFPS (P < 0.001) that could not be found in subjects with KOA (P=0.302). At 3-month post-intervention, DN also induced significant pain reduction in both subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI = -1.699 to -0.133) and high heterogeneity (P=0.022, I²=95.63%). In addition, DN induced significant short-term improvement in function, with an overall SMD of 6.069 (95% CI = 3.544 to 8.595) and high heterogeneity (P<0.001, I²=98.56%), when the analysis was conducted on both the KOA and PFPS groups. In subgroup analysis, only PFPS showed a positive short-term result (SMD=6.089, P<0.001), while KOA showed a statistically insignificant short-term effect (P=0.198). Similarly, at 3-month post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 2.428 to 9.252) and high heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction at short term and at 3-month post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
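The I² values quoted above measure the share of between-study variability not attributable to chance, computed from Cochran's Q under inverse-variance pooling. A minimal sketch of the calculation; the effect sizes and variances below are illustrative assumptions, not data from the reviewed trials:

```python
def meta_heterogeneity(effects, variances):
    """Inverse-variance pooled effect, Cochran's Q, and I² for a set of
    study-level effect sizes (e.g. standardized mean differences)."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # I² = (Q - df) / Q
    return pooled, q, i2

# Hypothetical SMDs and variances for three trials favoring the intervention
pooled, q, i2 = meta_heterogeneity([-1.8, -0.4, -2.2], [0.04, 0.05, 0.06])
print(round(pooled, 3), round(i2, 3))
```

With effects this dispersed, I² lands above 0.9, the same order as the 95-99% heterogeneity reported in the abstract, which is why a random-effects reading of the pooled SMD is warranted.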

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 117
288 Management of Hypoglycemia in Von Gierke’s Disease

Authors: Makda Aamir, Sood Aayushi, Syed Omar, Nihan Khuld, Iskander Peter, Ijaz Naeem, Sharma Nishant

Abstract:

Introduction: Glycogen Storage Disease Type 1 (GSD-1) is a rare phenomenon primarily affecting the liver and kidney. Excessive accumulation of glycogen and fat in the liver, kidney, and intestinal mucosa is noted in patients with glucose-6-phosphatase deficiency. Patients with GSD-1 have a wide spectrum of symptoms, including hepatomegaly, hypoglycemia, lactic acidemia, hyperlipidemia, hyperuricemia, and growth retardation. Age of onset, rate of disease progression, and severity are variable in this disease. Case: An 18-year-old male with GSD-1a (Von Gierke’s disease), hyperuricemia, and hypertension presented to the hospital with nausea and vomiting. The patient followed an hourly cornstarch regimen during the day and overnight through infusion via a PEG tube. The complaints started at work, where he was unable to tolerate oral cornstarch. He was hemodynamically stable on arrival. ABG showed pH 7.372, PaCO2 30.3, and PaO2 92.2. WBC 16.80, K+ 5.8, HCO3 13, BUN 28, Cr 2.2, glucose 60, AST 115, ALT 128, cholesterol 352, triglycerides >1000, uric acid 10.6, lactic acid 11.8, which trended down to 8.0. CT abdomen showed hepatomegaly and fatty infiltration with the PEG tube in place. He was admitted to the ICU and started on D5NS for hypoglycemia and lactic acidosis. Per request by the patient’s pediatrician, he was transitioned to IV D10/0.45NS at 110 mL/hr to maintain blood glucose above 75 mg/dL. Frequent glucose checks were done until he could tolerate his dietary regimen with cornstarch. Lactic acid trended down to 2.9, and glucose checks ranged between 100-110. Cr improved to 1.3, and his home medications (allopurinol and lisinopril) were resumed. He was discharged in stable condition with plans for further genetic therapy workup. Discussion: Mainstay therapy for Von Gierke’s Disease is the prevention of metabolic derangements, for which dietary and lifestyle changes are recommended. 
A diet low in fructose and sucrose is recommended, with the intake of galactose and lactose limited to one serving per day. Hypoglycemia treatment in such patients is two-fold, utilizing both quick- and stable-release sources. Cornstarch has been one such therapy since the 1980s; its slow digestion provides a steady release of glucose over a longer period of time compared with other sources of carbohydrates. Dosing guidelines vary from age to age and person to person, but it is highly recommended to check BG levels frequently to maintain a BG > 70 mg/dL. Associated high levels of triglycerides and cholesterol can be treated with statins, fibrates, etc. Conclusion: The management of hypoglycemia in GSD-1 presents various obstacles which could prove to be fatal. Due to the deficiency of G6P, treatment with a specialized hypoglycemic regimen is warranted. A D10 ½ NS infusion can be used to maintain blood sugar levels as well as correct metabolic or lactate imbalances. The infusion should be gradually weaned off once the patient can tolerate oral feeds, as this can help prevent the risk of hypoglycemia and other derangements. Further research is needed regarding more sustainable regimens for these patients.
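Titrating a dextrose infusion such as the D10/0.45NS described above is commonly reasoned about via the glucose infusion rate (GIR). A minimal sketch of the standard formula; the 70 kg patient weight is a hypothetical assumption, since the case report does not state one:

```python
def glucose_infusion_rate(rate_ml_hr, dextrose_pct, weight_kg):
    """GIR in mg/kg/min.

    dextrose_pct g/dL -> dextrose_pct * 10 mg/mL of fluid;
    dividing by 60 converts mL/hr to mL/min, then normalize by weight.
    """
    return rate_ml_hr * dextrose_pct * 10 / (60 * weight_kg)

# D10 (10 g/dL) at 110 mL/hr for a hypothetical 70 kg patient
gir = glucose_infusion_rate(110, 10, 70)
print(round(gir, 2))  # ≈ 2.62 mg/kg/min
```

A GIR in this range is modest for an adult, consistent with the infusion serving as a bridge until the enteral cornstarch regimen could resume.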

Keywords: von gierke, glycogen storage disease, hypoglycemia, genetic disease

Procedia PDF Downloads 92
287 Conditionality of Aid as a Counterproductive Factor in Peacebuilding in the Afghan Context

Authors: Karimova Sitora Yuldashevna

Abstract:

The August 2021 resurgence of the Taliban as a ruling force in Afghanistan once again challenged the global community to deal with an unprecedentedly unlike-minded government. To express their disapproval of the new regime, Western governments and intergovernmental institutions suspended their infrastructural projects and other forms of support. Moreover, Afghan offshore reserves were frozen, and Afghanistan was disconnected from the international financial system, which impeded even independent aid agencies’ work. The already poor provision of aid was then further complicated by political conditionality. The purpose of this paper is to investigate the efficacy of conditional aid policy in Afghan peacebuilding under Taliban rule and provide recommendations to international donors on a further course of action. Arguing that conditionality of aid is a counterproductive factor in the peacebuilding process, this paper employs scholarly literature on peacebuilding alongside reports from international non-governmental organizations (INGOs) that operate directly in Afghanistan. The existing debate on peacebuilding in Afghanistan revolves around aid as a means of building a democratic foundation for achieving peace on communal and national levels and why previous attempts to do so were unsuccessful. This paper focuses on how to recalibrate the approach to aid provision and peacebuilding in the new reality. In the early 2000s, amid the weak post-Cold War international will for a profound engagement in the conflict, humanitarian and development aid became the new means of achieving peace. Aid agencies provided resources directly to communities, minimizing the risk of local disputes. 
Through subsidizing education, governance reforms, and infrastructural projects, international aid accelerated school enrollment, introduced peace education, funded provincial council and parliamentary elections, and helped rebuild a conflict-torn country. When the Taliban seized power, the international community called on them to build an inclusive government based on respect for human rights, particularly girls’ and women’s schooling and work, as a condition to retain the aid flow. As the Taliban clearly failed to meet the demands, development aid was withdrawn. Some key United Nations agencies also refrained from collaborating with the de-facto authorities. However, contrary to the intended change in the Taliban’s behavior, such a move has only led to further deprivation of those whom the donors strived to protect. This is because concern for civilians has always been the second priority for the warring parties. This paper consists of four parts. First, it describes the scope of the humanitarian crisis that began in Afghanistan in 2001. Second, it examines the previous peacebuilding attempts undertaken by the international community and the contribution that international aid made to the peacebuilding process. Third, the paper describes the current regime and its relationships with the international donors. Finally, the paper concludes with recommendations for donors, who would have to be more realistic and reconsider their priorities. While it is certainly not suggested that the Taliban regime be legitimized internationally, the crisis calls upon donors to be more flexible in collaborating with the de-facto authorities for the sake of the civilians.

Keywords: Afghanistan, international aid, donors, peacebuilding

Procedia PDF Downloads 75
286 Developing and Standardizing Individual Care Plan for Children in Conflict with Law in the State of Kerala

Authors: Kavitha Puthanveedu, Kasi Sekar, Preeti Jacob, Kavita Jangam

Abstract:

In India, the Juvenile Justice (Care and Protection of Children) Act, 2015, the law related to children alleged and found to be in conflict with law, proposes to address the rehabilitation of children in conflict with law by catering to their basic rights through care and protection, development, treatment, and social re-integration. The major concerns identified in addressing the issues of children in conflict with law in Kerala, the southernmost state in India, were: 1. Lack of psychological assessment for children in conflict with law; 2. Poor psychosocial intervention for children in conflict with law on bail; 3. Lack of psychosocial intervention or proper care and protection of CCL residing at observation and special homes; 4. Lack of convergence with systems related to mental health care. Aim: To develop an individual care plan for children in conflict with law. Methodology: NIMHANS, a premier institute of mental health and neurosciences, collaborated with the Social Justice Department, Govt. of Kerala, to address this issue by developing a participatory methodology to implement psychosocial care in the existing services by integrating the activities through a multidisciplinary and multisectoral approach as per Sec. 18 of the JJ Act 2015. Developing the individual care plan: Key informant interviews and focus group discussions with multiple stakeholders consisting of legal officers, police, child protection officials, counselors, and home staff were conducted. Case studies were conducted among children in conflict with law. A checklist of 80 psychosocial problems among children in conflict with law was prepared, with eight major issues identified through the quantitative process: family and parental characteristics, family interactions and relationships, stressful life events, social and environmental factors, the child’s individual characteristics, education, child labour, and high-risk behavior. 
Standardised scales were used to identify anxiety, caseness, suicidality, and substance use among the children. This provided background data to understand the psychosocial problems experienced by children in conflict with law. In the second stage, a detailed plan of action was developed involving multiple stakeholders, including the Special Juvenile Police Unit, DCPO, JJB, and NGOs. The individual care plan was reviewed by a panel of 4 experts working in the area of child welfare, followed by review by multiple stakeholders in the juvenile justice system such as magistrates, JJB members, legal cum probation officers, district child protection officers, social workers, and counselors. Necessary changes were made to the individual care plan at each stage; it was then pilot tested with 45 children for a period of one month and standardized for administration among children in conflict with law. Result: The individual care plan developed through this scientific process was standardized and is currently administered among children in conflict with law in 3 districts of the state of Kerala; it will be further implemented in the other 14 districts. The program was successful in developing a systematic approach for the psychosocial intervention of children in conflict with law that can be a forerunner for other states in India.

Keywords: psychosocial care, individual care plan, multidisciplinary, multisectoral

Procedia PDF Downloads 266
285 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System

Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge

Abstract:

The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on the quality of water. This technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique to assess radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gasless, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each of the 2 L water samples was evaporated (at low heat) to a small volume, transferred evenly into a 50 mm stainless steel counting planchet (for homogenization), and heated by an IR lamp until a constant-weight residue was obtained. Then the samples were counted for gross alpha and beta. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste sludges and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, the implementation of water treatment plants and the measurement of water quality parameters in industrial waste water discharge are very important before release into the environment. This waste may contain different types of pollutants, including radioactive substances. 
All measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface water, i.e., 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980, per the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka, February 2008). The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, values proposed by the World Health Organization in 2011; the water is therefore acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) would usually not be exceeded (0.1 mSv/y). The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water; the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes.
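Comparing the gazetted discharge limits (quoted in µCi/mL) with activities measured in mBq/L requires only the standard conversion 1 Ci = 3.7×10¹⁰ Bq. A minimal sketch of that conversion:

```python
def uci_per_ml_to_mbq_per_l(activity_uci_ml):
    """Convert an activity concentration from µCi/mL to mBq/L.

    1 µCi = 3.7e4 Bq; multiplying by 1000 converts per-mL to per-L,
    and by another 1000 converts Bq to mBq.
    """
    return activity_uci_ml * 3.7e4 * 1000 * 1000

# Discharge limits quoted above: gross alpha 1e-9, gross beta 1e-8 µCi/mL
alpha_limit = uci_per_ml_to_mbq_per_l(1e-9)
beta_limit = uci_per_ml_to_mbq_per_l(1e-8)
print(round(alpha_limit, 3), round(beta_limit, 3))  # 37.0 370.0 mBq/L
```

Expressed this way, the limits (37 and 370 mBq/L) sit on the same scale as the counter's minimum detectable activities, which makes the screening comparison direct.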

Keywords: drinking water, gross alpha, gross beta, waste water

Procedia PDF Downloads 178
284 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution

Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos

Abstract:

Summer 2021 was marked by a number of severe wildfires in the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g. rescEU), a number of challenges have arisen, not only related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architectural structure for best associating modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system that would offer high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these responders, as well as their growing worry that, with the evolving threat of climate change, disasters and industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system. 
The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, according to a big fire scenario in a Wildland Urban Interface (WUI) zone. An important issue will be examined during the second pilot study: unobstructed communication, in the form of the flow of information, is severely affected during a crisis, whether between members of the wider public, from the first responders to the public, or vice versa. Call centers are flooded with requests, and communication is compromised or breaks down on many occasions, which in turn affects the effort to build a common operational picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if road traffic is disrupted, aerial means remain the only way to observe and perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed.

Keywords: first responders, safety, civil protection, new technologies

Procedia PDF Downloads 121
283 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends heavily on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. 
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly and more effectively than existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
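The forward step of the synthetic-measurement pipeline, mapping a susceptibility distribution to the field it induces, can be sketched with the standard Fourier-domain dipole kernel. This is a minimal illustration of the kind of physics model the abstract describes, not the authors' code; the grid size, voxel size, and the cubic "inclusion" are arbitrary choices.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=1.0):
    # Fourier-domain dipole kernel: D(k) = 1/3 - kz^2 / |k|^2
    kx, ky, kz = np.meshgrid(
        np.fft.fftfreq(shape[0], d=voxel_size),
        np.fft.fftfreq(shape[1], d=voxel_size),
        np.fft.fftfreq(shape[2], d=voxel_size),
        indexing="ij",
    )
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[k2 == 0] = 0.0        # remove the undefined k = 0 term
    return d

def susceptibility_to_field(chi):
    # forward model: induced field = IFFT( D(k) * FFT(chi) )
    return np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))

chi = np.zeros((32, 32, 32))
chi[12:20, 12:20, 12:20] = 1.0      # synthetic iron-rich inclusion
field = susceptibility_to_field(chi)
```

Inverting this convolution is the ill-posed Process I the abstract refers to: D(k) vanishes on a cone, so regularization (or a learned model) is needed.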

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 114
282 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy

Authors: Chenyi Ma

Abstract:

Background: The second costliest hurricane in U.S. history, Sandy made landfall in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in nine coastal counties that accounted for 98% of the major and severely damaged housing units in NJ overall. The programs include the Individuals and Households Assistance Program, the Small Business Loan Program, the National Flood Insurance Program, and the Federal Emergency Management Agency (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through the Community Development Block Grant: Reconstruction, Rehabilitation, Elevation, and Mitigation Program and the Homeowner Resettlement Program. How these policies individually and as a whole impacted housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters; nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against negative impacts from disasters; however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery, looking at both community characteristics and policy interventions.
Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators to changes in housing stock (operationally defined as the percent of building permits to total occupied housing units/households) between 2010 and 2014, two years before and after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via the multiple public programs, separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with higher proportions of households with an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of community received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed, and Hispanic households were “vulnerable” to lower levels of housing recovery because they lacked sufficient public program support. Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery. Because housing is the “wobbly pillar” of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy.
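The SEM path logic described above (vulnerability acting on recovery both directly and via public program support) can be illustrated with a single-mediator model on synthetic data. For one mediator estimated by OLS, the total effect decomposes exactly into direct plus indirect (a×b) components. All variable names and coefficients below are invented for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# hypothetical standardized community measures
vulnerability = rng.normal(size=n)
support = 0.6 * vulnerability + rng.normal(scale=0.5, size=n)   # mediator: public program support
recovery = 0.8 * support - 0.3 * vulnerability + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    # least-squares fit with an intercept; returns the coefficient vector
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(support, vulnerability)[1]            # path: vulnerability -> support
b, c_prime = ols(recovery, support, vulnerability)[1:3]
indirect = a * b                              # effect mediated by public support
total = ols(recovery, vulnerability)[1]       # total effect of vulnerability
```

The identity total = direct + indirect holds algebraically for OLS with a single mediator, which is what makes the mediation decomposition interpretable.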

Keywords: community social vulnerability, community resilience, hurricane, public policy

Procedia PDF Downloads 357
281 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both the cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals like lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter.
The CVs among the different monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% with two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
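A minimal sketch of how PCA can flag a surrogate parameter: if TSS and turbidity co-vary strongly across storm events, they load together on the first principal component while an independent parameter (here COD standing in for organic matter) loads elsewhere. The data are synthetic; the distribution parameters are illustrative, not the Geumhak values.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic storm-event samples (rows) x water quality parameters (columns)
turbidity = rng.lognormal(3.0, 0.5, 200)
tss = 1.8 * turbidity + rng.normal(0.0, 5.0, 200)   # TSS tracks turbidity
cod = rng.lognormal(2.0, 0.4, 200)                  # independent parameter
X = np.column_stack([tss, turbidity, cod])

Xs = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize columns
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)   # PCA via SVD
loadings = Vt.T                                     # columns are principal axes
explained = s**2 / (s**2).sum()                     # variance ratio per component
```

Correlated parameters sharing a dominant loading on PC1 is exactly the biplot pattern used in the abstract to nominate TSS as a surrogate.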

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 227
280 Identity and Mental Adaptation of Deaf and Hard-of-Hearing Students

Authors: N. F. Mikhailova, M. E. Fattakhova, M. A. Mironova, E. V. Vyacheslavova

Abstract:

For the mental and social adaptation of deaf and hard-of-hearing people, cultural and social aspects, namely the formation of identity (acculturation) and educational conditions, are highly significant. We studied 137 deaf and hard-of-hearing students in different educational situations, using the following methods: Big Five (Costa & McCrae, 1997), TRF (Becker, 1989), WCQ (Lazarus & Folkman, 1988), self-esteem and coping strategies (Jambor & Elliott, 2005), and the self-stigma scale (Mikhailov, 2008). The type of self-identification of students depended on the degree of deafness, the type of education, and the method of communication in the family: greater hearing loss, education in schools for the deaf, and gesture communication increased the likelihood of a 'deaf' acculturation. Less hearing loss, inclusive education in a public school or a school for the hearing-impaired, and mixed communication in the family contributed to the formation of a 'hearing' acculturation. The choice of specific coping depended on the degree of deafness: greater hearing loss increased 'withdrawal into the deaf world' coping and decreased 'bicultural skills' coping. People with mild hearing loss tended to cover it up. In the context of the ongoing discussion, we researched personality characteristics of deaf and hard-of-hearing students, coping, and other deafness-associated factors depending on their acculturation type. Students who identified themselves with the 'hearing world' had high self-esteem, a higher level of extraversion, self-awareness, personal resources, willingness to cooperate, better psychological health, emotional stability, a higher capacity for empathy, a life more saturated with feelings and meaning, and a high sense of self-worth. They also actively used the strategies of problem-solving, acceptance of responsibility, and positive reappraisal. Students who limited themselves to the culture of deaf people had more severe hearing loss and accordingly faced more communication barriers.
Their lack of use, or seldom use, of coping strategies points to a decreased level of stress in their lives. Their self-esteem has not been challenged in the specific social environment of students with the same severity of defect, and thus this environment provides a sense of comfort (as suggested by the high scores on psychological health, personality resources, and emotional stability). Students with bicultural acculturation had a higher level of psychological resources: they used positive reappraisal coping more often and had a higher level of psychological health. Lack of belonging to a certain culture (marginality) leads to personality disintegration and social and psychological disadaptation: deaf and hard-of-hearing students with marginal identification had lower self-esteem, worse psychological health and personal resources, and lower levels of extroversion, self-confidence, and life satisfaction. They, in fact, become a 'risk group' (many of them dropped out of university, divorced, and one even ended up in the ranks of ISIS). All these data argue for the importance of a cultural 'anchor' for people with hearing deprivation. Supported by RFBR grant No. 19-013-00406.

Keywords: acculturation, coping, deafness, marginality

Procedia PDF Downloads 184
279 Barriers to Tuberculosis Detection in Portuguese Prisons

Authors: M. F. Abreu, A. I. Aguiar, R. Gaio, R. Duarte

Abstract:

Background: Prison establishments constitute high-risk environments for the transmission and spread of tuberculosis (TB), given their epidemiological context and the difficulty of implementing preventive and control measures. Guidelines for the control and prevention of tuberculosis in prisons have been described internationally as incomplete and heterogeneous, due to several identified obstacles, for example, the scarcity of human resources and funding for prisoner health services. In Portugal, a protocol was created in 2014 with the aim of defining and standardizing procedures for the detection and prevention of tuberculosis within prisons. Objective: The main objective of this study was to identify and describe barriers to tuberculosis detection in prisons of the Porto and Lisbon districts of Portugal. Methods: A cross-sectional study was conducted from 2 January 2018 to 30 June 2018. Semi-structured questionnaires were administered to health care professionals working in the prisons of the districts of Porto (n=6) and Lisbon (n=8). As the inclusion criterion, we considered having work experience in the area of tuberculosis (in diagnosis, treatment, or follow-up). The questionnaires were self-administered, in paper format. Descriptive analyses of the questionnaire variables were made using frequencies and medians. Afterwards, a hierarchical agglomerative cluster analysis was performed. After obtaining the clusters, the chi-square test was applied to study the association between the variables collected and the clusters. The level of significance considered was 0.05. Results: Of the total of 186 health professionals, 139 met the inclusion criteria and 82 were interviewed (62.2% participation). Most were female, nurses, with a median age of 34 years, on fixed-term employment contracts. From the cluster analysis, two groups were identified with different characteristics and behaviors regarding the procedures of this protocol.
Statistically significant results were found: the members of cluster 1 (78% of the total participants) had worked in prisons for a longer time (p=0.003), with 45.3% having worked there for more than 4 years while 50% of the members of cluster 2 had worked there for less than a year, and they more frequently answered that they know and apply the procedures of the protocol (p<0.001). Both clusters frequently cited the need for theoretical-practical training in TB (p<0.001), especially in the areas of diagnosis, treatment, and prevention, and noted the scarcity of funding for prisoner health services (p<0.001). Regarding the procedures for TB screening (periodic and contact screening) and the procedures for transferring a prisoner with the disease, cluster 1 also more frequently reported performing them (p<0.001). They also reported that the material/equipment for TB screening is accessible and available (p<0.001). From these clusters, we identified as barriers the scarcity of human resources, the need for theoretical-practical training in tuberculosis, inexperience of working in prison health services, and limited knowledge of the protocol procedures. Conclusions: The barriers found in this study are the same as those described internationally. The protocol is mostly being applied in Portuguese prisons. The study also showed the need to invest in human and material resources. This investigation bridged gaps in knowledge that could help prison health services optimize the care provided for early detection of tuberculosis and adherence of prisoners to its treatment.
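The cluster-versus-behavior association test used above can be reproduced in outline: a Pearson chi-square statistic on a 2×2 contingency table, compared against the df=1 critical value at the study's 0.05 significance level. The counts below are hypothetical, invented for illustration, not the study's data.

```python
import numpy as np

# hypothetical contingency table: cluster membership vs
# "knows and applies the protocol"
#                 knows  does not
table = np.array([[55,  9],      # cluster 1
                  [ 6, 12]])     # cluster 2

def chi_square(obs):
    # Pearson chi-square statistic from observed vs expected counts
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

stat = chi_square(table)
significant = stat > 3.841       # critical value for df = 1 at alpha = 0.05
```

With small expected counts (as in the lower-right cell here), studies often fall back on Fisher's exact test; the chi-square form is shown because it is what the abstract names.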

Keywords: barriers, health care professionals, prisons, protocol, tuberculosis

Procedia PDF Downloads 125
278 Epidemiology of Healthcare-Associated Infections among Hematology/Oncology Patients: Results of a Prospective Incidence Survey in a Tunisian University Hospital

Authors: Ezzi Olfa, Bouafia Nabiha, Ammar Asma, Ben Cheikh Asma, Mahjoub Mohamed, Bannour Wadiaa, Achour Bechir, Khelif Abderrahim, Njah Mansour

Abstract:

Background: In hematology/oncology, improvements in health care have allowed increasingly aggressive diagnostic and therapeutic procedures. Nevertheless, these intensified procedures have been associated with a higher risk of healthcare-associated infections (HAIs). We undertook this study to estimate the burden of HAIs in cancer patients in an onco-hematology unit of a Tunisian university hospital. Materials/Methods: A prospective, observational study, based on active surveillance over a period of 6 months from March through September 2016, was undertaken in the department of onco-hematology of a university hospital in Tunisia. Patients who stayed in the unit for ≥ 48 h were followed until hospital discharge. The Centers for Disease Control and Prevention (CDC) criteria for site-specific infections were used as standard definitions for HAIs. Results: One hundred fifty patients were included in the study. The gender distribution was 33.3% girls and 66.6% boys, with a mean age of 23.12 years (SD = 18.36 years). The main patient diagnosis was Acute Lymphoblastic Leukemia (ALL): 48.7% (n=73). The mean length of stay was 21 ± 18 days. Almost 8% of patients had an implantable port (n=12), 34.9% (n=52) had a lumbar puncture, and 42.7% (n=64) had a bone marrow puncture. Chemotherapy was instituted in 88% of patients (n=132). Eighty (53.3%) patients had neutropenia at admission. The cumulative incidence of HAIs was 32.66% of patients; the incidence density was 15.73 per 1000 patient-days in the unit. The mortality rate was 9.3% (n=14), and 50% of the deaths were caused by HAIs. The most frequent episodes of infection were: infection of skin and superficial mucosa (5.3%), pulmonary aspergillosis (4.6%), healthcare-associated pneumonia (HAP) (4%), central venous catheter-associated infection (4%), digestive infection (5%), and primary bloodstream infection (2.6%). Finally, the incidence rate of fever of unknown origin (FUO) was 14%.
In the cases of skin and superficial infection (n=8), 4 episodes were documented, and the organisms implicated were Escherichia coli, Geotrichum capitatum, and Proteus mirabilis. For pulmonary aspergillosis, 6 cases were diagnosed clinically and radiologically, and one was confirmed by a positive Aspergillus antigen in bronchial aspirate. Only one patient died due to this infection. Of the HAP cases (n=6), four episodes were diagnosed clinically and radiologically. No bacterial etiology was established in these cases. Two patients died due to HAP. For primary bloodstream infection (4 cases), the implicated organisms were Enterobacter cloacae, Geotrichum capitatum, Klebsiella pneumoniae, and Streptococcus pneumoniae. Conclusion: This type of prospective study is an indispensable tool for internal quality control. It is necessary to evaluate preventive measures and to design control guides and strategies aimed at reducing the HAI rate and the morbidity and mortality associated with infection in a hematology/oncology unit.
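The two incidence figures reported in the abstract can be cross-checked with elementary arithmetic. The episode count of 49 is back-calculated from the reported 32.66% of 150 patients, and patient-days are approximated from the mean stay; both are assumptions for illustration, not figures stated by the authors.

```python
# cross-check of cumulative incidence vs incidence density
patients = 150
mean_stay_days = 21
hai_episodes = 49   # assumed: ~32.66% of 150 patients

cumulative_incidence = hai_episodes / patients           # fraction of patients
patient_days = patients * mean_stay_days                 # ~3150 patient-days
incidence_density = 1000 * hai_episodes / patient_days   # per 1000 patient-days
```

The approximation lands near the reported 15.73 per 1000 patient-days, which is consistent with the study following each patient for roughly the mean length of stay.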

Keywords: cohort prospective studies, healthcare associated infections, hematology oncology department, incidence

Procedia PDF Downloads 375
277 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach

Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft

Abstract:

Chronic hepatitis B virus (HBV) infection can be treated with nucleot(s)ide analogs (NAs), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long, which is not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell responses, and HLA typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated as a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which is carried out by a multidisciplinary team composed of computer scientists, infection biologists, and immunologists.
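The knowledge-graph idea, facts stored as subject-predicate-object triples that can be pattern-queried to stratify patients, can be sketched without any framework. Every identifier, predicate, and value below is hypothetical, chosen only to mirror the kinds of variables the abstract lists.

```python
# minimal knowledge graph: facts as (subject, predicate, object) triples
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# hypothetical facts about two patients
add("patient:42", "hasHBsAgStatus", "negative")
add("patient:42", "receivedTreatment", "NA")
add("patient:42", "hasHLAType", "HLA-A*02:01")
add("patient:43", "hasHBsAgStatus", "positive")

def query(s=None, p=None, o=None):
    # pattern match over the graph; None acts as a wildcard
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# stratify: patients who achieved HBsAg loss
responders = [ts for (ts, _, _) in query(p="hasHBsAgStatus", o="negative")]
```

Real systems (e.g., RDF stores queried with SPARQL) add ontology-backed typing and identifier reconciliation on top of this same triple pattern, which is what resolves the heterogeneous formats and patient identifiers the abstract describes.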

Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology

Procedia PDF Downloads 95
276 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data

Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito

Abstract:

Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be embodied in pavement management; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers near the road surface, such as the surface course and binder course, occurs in the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today carry serious time-related deterioration deriving from the long time span since they entered service, it is clear that repairing layers deep in pavements, such as the base course and subgrade, must be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanistic or statistical. While few mechanistic models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements.
One describes the deterioration process by estimating a Markov deterioration hazard model, while another estimates a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers near the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of the layers deep in pavements, in addition to the surface layers, by estimating a deterioration hazard model with continuous indexes. This model avoids the loss of information that occurs when discrete rating categories are set in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By portraying continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point in time and establish a risk control level arbitrarily, this study is expected to provide information, such as life cycle cost, that supports decisions on where and when to perform maintenance.
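One common continuous-time specification for this kind of deterioration hazard model is the Weibull form, whose hazard rate rises with pavement age when the shape parameter exceeds 1. This is a generic sketch of that family, not the authors' estimated model; the parameter values are illustrative.

```python
import math

def weibull_hazard(t, shape, scale):
    # hazard rate: lambda(t) = (m / eta) * (t / eta)**(m - 1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def survival(t, shape, scale):
    # probability that load bearing capacity has not yet crossed
    # the chosen risk control threshold by age t
    return math.exp(-((t / scale) ** shape))

# hypothetical parameters for a base-course layer: increasing hazard,
# characteristic life of about 30 years
shape, scale = 2.0, 30.0
p_sound_at_20 = survival(20.0, shape, scale)
```

Evaluating the survival curve at candidate inspection ages is what lets an administrator set a risk control level and rank sections for maintenance, as the abstract describes.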

Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement

Procedia PDF Downloads 366
275 Quasi-Federal Structure of India: Fault-Lines Exposed in COVID-19 Pandemic

Authors: Shatakshi Garg

Abstract:

As the world continues to grapple with the COVID-19 pandemic, India, one of the most populous democratic federal developing nations, continues to report among the highest active cases and deaths and struggles to keep its health infrastructure from succumbing to the exponentially growing requirement for hospital beds, ventilators, and oxygen to save the thousands of lives daily at risk. In this context, the paper outlines the handling of the COVID-19 pandemic since it first hit India in January 2020, specifically the policy decisions taken by the Union and the State governments, from the larger perspective of its federal structure. The Constitution of India, adopted in 1950, enshrined the federal relations between the Union and the State governments by way of the constitutional division of revenue-raising and expenditure responsibilities. By way of the 73rd and 74th Amendments to the Constitution, powers and functions were devolved further to the third tier, namely the local governments, with the intention of further strengthening the federal structure of the country. However, with time, several constitutional amendments have shifted the scales in favour of the Union government. The paper briefly traces some of these major amendments as well as some policy decisions that made federal relations asymmetrical. As a result, data on key fiscal parameters helps establish how the Union government gained the upper hand at the expense of weak State governments, reducing the local governments to mere constitutional bodies without adequate funds and fiscal autonomy to carry out their assigned functions. This quasi-federal structure of India, with the Union government amassing the majority of power in terms of ‘funds, functions and functionaries’, exposed the perils of weakened sub-national governments once the COVID-19 pandemic struck.
With a complex quasi-federal structure and a heterogeneous population of over 1.3 billion, the announcement of a sudden nationwide lockdown by the Union government was followed by the plight of migrants struggling to reach home safely in the absence of adequate arrangements for travel and a safety net made by the Union government. With the limited autonomy they enjoyed, the States were mostly dictated to by the Union government on most aspects of handling the pandemic, including protocols for the lockdown, re-opening after the lockdown, and the vaccination drive. The paper suggests that certain policy decisions, like demonetization and the introduction of GST, taken by the incumbent government since 2014, when it first came to power, have further weakened the State and local governments, and these have amounted to catastrophic losses, both economic and human. The roles of the executive, legislature, and judiciary are explored to establish how all three arms of the government have worked simultaneously to further weaken and expose the fault-lines of the federal structure of India, which has left the nation incapacitated to handle this pandemic. The paper then argues the urgency of re-examining the federal structure of the country and undertaking measures that strengthen the sub-national governments and restore the federal spirit enshrined in the constitution, so as to avoid mammoth human and economic losses from a pandemic of this sort.

Keywords: COVID-19 pandemic, India, federal structure, economic losses

Procedia PDF Downloads 157
274 Knowledge and Practices on Waste Disposal Management Among Medical Technology Students at National University – Manila

Authors: John Peter Dacanay, Edison Ramos, Cristopher James Dicang

Abstract:

Waste management is a global concern due to increasing waste production from changing consumption patterns and population growth. Proper waste disposal management is a critical aspect of public health and environmental protection. In the healthcare industry, medical waste is generated in large quantities, and if not disposed of properly, it poses a significant threat to human health and the environment. Efficient waste management conserves natural resources and prevents harm to human health, and implementing an effective waste management system can save human lives. The study aimed to assess the level of awareness of and practices in waste disposal management, highlighting the understanding of proper disposal, potential hazards, and environmental implications among Medical Technology students. This would help provide recommendations for improving waste management practices in healthcare settings as well as in educational institutions. In the collected data, the typical respondent was a 21-year-old female. The frequency and percentage results show that medical technology students' knowledge of laboratory waste management is high: all respondents demonstrated a solid understanding of proper disposal methods, regulations, risks, and handling procedures related to laboratory waste. These findings emphasize the significance of education and awareness programs in equipping individuals involved in laboratory practices with the necessary knowledge to handle and dispose of hazardous and infectious waste properly. Most respondents demonstrate positive practices in laboratory waste management, including proper segregation and disposal in designated containers. However, there are concerns about the occasional mixing of waste types, underscoring the need to reiterate proper waste segregation.
Students show a strong commitment to using personal protective equipment and promptly cleaning up spills. Some students admit to improper disposal due to rushing, highlighting the importance of time management and the prioritization of safety. Overall, students follow protocols for hazardous waste disposal, indicating a responsible approach. The school's waste management system is perceived as adequate, but continuous assessment and improvement are necessary. Encouraging the reporting of issues and concerns is crucial for ongoing improvement and risk mitigation. The analysis reveals a moderate positive relationship between the respondents' knowledge and practices regarding laboratory waste management. The correlation was reported as statistically significant (reported p-value of 0.26 against a 0.05 significance level), suggesting that individuals with higher levels of knowledge tend to exhibit better practices. These findings align with previous research emphasizing the pivotal role of knowledge in influencing individuals' behaviors and practices concerning laboratory waste management. When individuals possess a comprehensive understanding of proper procedures, regulations, and potential risks associated with laboratory waste, they are more inclined to adopt appropriate practices. Therefore, fostering knowledge through education and training is essential in promoting responsible and effective waste management in laboratory settings.

Keywords: waste disposal management, knowledge, attitude, practices

Procedia PDF Downloads 68
273 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1

Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar

Abstract:

Pasteurella multocida (PM) is an important veterinary opportunistic pathogen, particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis, and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. The Malaysian isolate PMTB2.1 of Pasteurella multocida subspecies multocida serotype A was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced using third-generation sequencing technology on the PacBio RS2 system and analyzed bioinformatically via de novo assembly followed by in-depth comparative genomics. De novo assembly of the PacBio raw reads generated 3 contigs; gap filling of the aligned contigs with PCR sequencing then produced a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (Accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs, and 4 ncRNAs. A comparative genome sequence analysis of nine complete genomes, namely Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli, and six P. multocida genomes (PM70, PM36950, PMHN06, PM3480, PMHB01, and PMTB2.1), was carried out based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 CDs (13%) are unique to PMTB2.1 and 1,125 CDs have orthologs in all nine genomes. This reflects the overall close relationship of these bacteria and supports their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five P. multocida strains, with genomic distances of less than 0.13.
Synteny analysis shows subtle differences in genetic structure among different P. multocida strains, indicating the dynamics of frequent gene-transfer events among them. However, PM3480 and PM70 exhibited exceptionally large structural variation, since they were swine and chicken isolates, respectively. Furthermore, the genomic structure of PMTB2.1 most closely resembles that of PM36950, with a genome-size difference of approximately 34,380 bp (smaller than PM36950); the strain-specific Integrative and Conjugative Element (ICE) found in PM36950 is absent from PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found only in PMTB2.1; one of the phages is similar to the transposable phage SfMu. A phylogenomic tree was constructed and rooted with E. coli, A. pleuropneumoniae, and H. parasuis based on the OrthoMCL analysis. The genome of P. multocida strain PMTB2.1 clustered with the bovine isolates PM36950 and PMHB01, was separated from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and is distant from Actinobacillus and Haemophilus. Previous studies based on Single Nucleotide Polymorphisms (SNPs) and Multilocus Sequence Typing (MLST) were unable to show a clear phylogenetic relationship between P. multocida strains and their different hosts. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future study in elucidating the mechanisms behind the ability of the bacteria to cause disease in susceptible animals.
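The OrthoMCL/Venn comparison above reduces to set operations over ortholog-group membership. A toy Python sketch; the genome names come from the abstract, but the group contents are invented for illustration:

```python
# Toy ortholog-group table: group id -> set of genomes containing a member.
genomes = {"PMTB2.1", "PM70", "PM36950", "PMHN06", "PM3480", "PMHB01",
           "A_pleuropneumoniae", "H_parasuis", "E_coli"}

ortholog_groups = {
    "OG1": set(genomes),                       # core group, present in all nine
    "OG2": {"PMTB2.1"},                        # strain-specific to PMTB2.1
    "OG3": {"PMTB2.1", "PM36950", "PMHB01"},   # shared by the bovine isolates
}

# Core genome: groups with a member in every genome.
core = [g for g, members in ortholog_groups.items() if members == genomes]

# Strain-specific CDs: groups found only in PMTB2.1.
unique_to_pmtb = [g for g, members in ortholog_groups.items()
                  if members == {"PMTB2.1"}]
```

Run over the real 2,176 coding sequences, the same two comprehensions yield the 1,125 shared and 282 PMTB2.1-specific CDs reported above.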

Keywords: comparative genomics, DNA sequencing, phage, phylogenomics

Procedia PDF Downloads 170
272 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy-metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations exceeding acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years several environmental assessments reported the dire environmental crises resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the approximately 3,000 tons of mercury waste stored in the factory's storage facility for over two decades. Recently, the theft of containers of toxic material from the Thor Chemicals warehouse and a subsequent fire that ravaged the facility have further escalated the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and Inanda Dam, as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions favor microbial methylation of mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sampling site from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam.
One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference post hoc test to determine whether mercury contamination varies with distance from the source of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and among aquatic macrophytes, water hyacinth is expected to accumulate a higher concentration of mercury than terrestrial plants and crops.
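The planned one-way ANOVA compares between-site and within-site variance in Hg concentration. A minimal F-statistic sketch in Python; the site readings are hypothetical values, not field data:

```python
def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs. grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: observations vs. their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical Hg readings (ug/g) at four sites with decreasing distance-decay.
sites = [[4.1, 3.9, 4.3], [2.8, 3.0, 2.9], [1.2, 1.1, 1.4], [0.5, 0.6, 0.4]]
F, df_b, df_w = one_way_anova_F(sites)
```

A large F relative to the F(df_between, df_within) critical value would justify the planned LSD post hoc comparison of individual site pairs.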

Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth

Procedia PDF Downloads 198
271 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials

Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte

Abstract:

Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often constructed using lightweight systems and materials, offer the opportunity for a responsive, adaptive relationship with the dynamic behavior of the outdoor climate. In regions characterized by high fluctuations in daily temperature, the ability to adapt to environmental changes is therefore both paramount and challenging. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements like movable shading devices, phase change materials, and advanced control systems have revolutionized the built environment. They offer a promising path for reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and reduced construction waste, improving overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials and changes in geometry and control to optimize facade performance. These innovations will allow a facade system to respond to shifting outdoor temperature, relative humidity, wind, and solar radiation conditions, ensuring that energy efficiency and occupant comfort are jointly met.
This review addresses the potential configurations of double-skin facades, particularly concerning their responsiveness to seasonal variations in temperature, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations, and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air, and thermal-buffering mechanisms. The review places specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as facade color, slat angle, and the dimensions, positioning, and type of shading devices employed in these innovative architectural structures. This paper synthesizes current research trends in the field, presenting case studies and technological innovations to build a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely double-skin lightweight solutions that create sustainable, adaptable, and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is growing recognition that retrofitting existing buildings may emerge as the predominant approach.
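The variable U-value discussed above follows from the standard series-resistance model of a layered envelope: the total resistance is the sum of surface, layer, and cavity resistances, and U is its reciprocal. A sketch with assumed layer values (the thicknesses, conductivities, and cavity resistance are illustrative, in the spirit of ISO 6946, not data from the review):

```python
def u_value(layers, r_si=0.13, r_se=0.04, r_cavity=0.0):
    """Thermal transmittance U (W/m2K) of a facade build-up.
    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    r_si, r_se: internal/external surface resistances (m2K/W).
    r_cavity: added resistance of the air gap between the two skins.
    """
    r_total = r_si + r_se + r_cavity + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# Hypothetical lightweight skin: 12 mm board + 100 mm mineral wool + 12 mm board.
build_up = [(0.012, 0.25), (0.10, 0.04), (0.012, 0.25)]
u_closed = u_value(build_up, r_cavity=0.18)  # cavity sealed: extra resistance
u_open = u_value(build_up, r_cavity=0.0)     # cavity ventilated: gap discounted
```

Switching the cavity between ventilated and sealed states is one mechanism by which a DSF "manipulates" its effective U-value between summer and winter modes.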

Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance

Procedia PDF Downloads 58
270 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy

Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro

Abstract:

Asphaltic bitumen has been largely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations, and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This comes from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical, and barrier properties. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute for the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD), and Atomic Force Microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes.
However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion in the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge about either the spatial distribution of the intercalated/exfoliated platelets or the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions.

Keywords: atomic force microscopy, bitumen, composite, isocyanate, montmorillonite

Procedia PDF Downloads 244
269 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain

Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi

Abstract:

Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with increased intensity of gastrointestinal or central side effects and slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: A Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 patients who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol 100 mg IV as a bolus, with a subsequent 50 mg or 100 mg dose administered over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hours, patients in the tramadol group who had a pain intensity score of ≥4 were switched to oral tramadol immediate-release 100 mg capsules until the pain intensity score fell below 4. All patients who achieved a pain intensity score of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral tramadol immediate-release 50 mg capsules. The statistical analysis plan was designed as a non-inferiority comparison with tramadol for pain intensity difference at 60 minutes (PID60min), the sum of pain intensity differences at 60 minutes (SPID60min), and Physician Global Assessment at 24 hours (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized patients undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to Inj/Oral Tramadol in the relief of moderate to severe post-operative pain.
On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73 ± 2.24 vs. 1.64 ± 1.92; difference -0.09 [95% CI, -0.43, 0.60]). On the co-primary endpoint PGA24hrs, Tapentadol NS was non-inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ± 0.704; difference -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48, 72, and 120 hours, clinically superior pain relief was observed with the Tapentadol NS formulation, statistically significant (p < 0.05) at each of these time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and the need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
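The non-inferiority reading of such intervals is mechanical: with a treatment-minus-control difference, non-inferiority holds when the lower bound of the confidence interval stays above the pre-specified negative margin. A sketch using the SPID60min interval quoted above; the trial's actual margin is not reported in this abstract, so -1.0 here is purely an assumed illustration:

```python
def is_non_inferior(ci_lower, ni_margin):
    """Treatment-minus-control difference: non-inferiority is declared when
    the lower 95% CI bound lies above the (negative) non-inferiority margin."""
    return ci_lower > ni_margin

# SPID60min difference CI from the abstract: [-0.43, 0.60].
# The margin -1.0 is an assumption for illustration, not the trial's value.
result = is_non_inferior(-0.43, -1.0)
```

The same check applied to the PGA24hrs interval [-0.07, 0.28] would also pass for any margin below -0.07, consistent with the non-inferiority conclusion stated above.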

Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain

Procedia PDF Downloads 223
268 Solymorph: Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

Solymorph, a kinetic building facade designed for optimal energy capture and architectural expression, is explored in this paper. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of novel facade systems. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, Solymorph leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential of Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, 3D printing, and laser cutting, were utilized to fabricate the physical components. Finally, a modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of Solymorph to an existing library building at Politecnico di Milano is presented. The facade system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable-net structure.
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted for a 3D printer's limitations. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. Solymorph demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, Solymorph paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
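A random parametric surface of the kind the LLM was asked to generate might look like the following seeded sketch; the function name and parameters (nu, nv, amplitude, seed) are assumptions for illustration, not the project's actual script:

```python
import random

def random_surface(nu, nv, amplitude=1.0, seed=0):
    """Return an nu x nv grid of z-heights for a random facade surface.
    nu, nv: grid resolution in the two parametric directions.
    amplitude: maximum vertical deviation; seed: reproducibility of the form.
    """
    rng = random.Random(seed)  # local RNG so repeated calls are deterministic
    return [[amplitude * rng.uniform(-1.0, 1.0) for _ in range(nv)]
            for _ in range(nu)]

# User-defined parameters, as described in the abstract.
grid = random_surface(nu=10, nv=8, amplitude=0.5, seed=42)
```

Seeding matters in this workflow: it lets the same "random" surface be regenerated identically inside Grasshopper when downstream panel and actuator layouts are re-run.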

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 36
267 Numerical Prediction of Width Crack of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to study the prediction of cracking in concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks exceeding allowable widths are unacceptable in environments aggressive to reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built in the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called the CZM.
The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterized by the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure were compared with experimental tests. A good correlation between the experimental and numerical load-displacement curves was found, and the numerical models also allowed load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding crack width can be considered good from a practical point of view; favorable results were obtained for both the load-displacement curves and the predicted crack locations.
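The bilinear Mode I cohesive law described above is a triangular traction-separation curve: linear loading to a peak traction at an opening delta_0, then linear softening to full debonding at delta_f. A sketch with assumed parameters (the peak traction and separations are illustrative, not values from the study):

```python
def bilinear_traction(delta, t_max, delta_0, delta_f):
    """Mode I bilinear cohesive law.
    delta: normal separation; t_max: peak traction at delta_0;
    delta_f: separation at complete debonding (zero traction beyond)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return t_max * delta / delta_0                          # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_0)  # softening
    return 0.0                                                  # fully debonded

# Assumed parameters: peak traction 3 MPa, delta_0 = 0.01 mm, delta_f = 0.1 mm.
# The enclosed area, G_c = 0.5 * t_max * delta_f, is the critical fracture
# energy that the abstract identifies with breaking the interface apart.
g_c = 0.5 * 3.0 * 0.1
```

In the ANSYS setup described, t_max and delta_f correspond to the maximum normal contact stress and the contact gap at completion of debonding assigned to the CONTA173 pairs.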

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 148
266 Harnessing Renewable Energy as a Strategy to Combating Climate Change in Sub Saharan Africa

Authors: Gideon Nyuimbe Gasu

Abstract:

Sub-Saharan Africa is at a critical point, experiencing rapid population growth, particularly in urban areas, and a young, growing workforce. At the same time, the growing risk of catastrophic global climate change threatens to weaken food production systems, increase the intensity and frequency of droughts, floods, and fires, and undermine gains in development and poverty reduction. Although the region has the lowest per capita greenhouse gas emission level in the world, it will need to join global efforts to address climate change, including action to avoid significant increases in emissions and to encourage a green economy. Thus, there is a need for the concept of 'greening the economy' as prescribed at the Rio Summit of 1992. Renewable energy is one of the key means of achieving the laudable goal of maintaining a green economy. There is a need to address climate change while facilitating continued economic growth and social progress, as energy today is critical to economic growth. Fossil fuels remain the major contributors to greenhouse gas emissions. Thus, cleaner technologies such as carbon capture and storage and renewable energy have emerged as commercially competitive. This paper sets out to examine how to achieve a low-carbon economy with minimal emission of carbon dioxide and other greenhouse gases, which is one of the outcomes of implementing a green economy. Also, the paper examines the different renewable energy sources, such as nuclear, wind, hydro, biofuel, and solar photovoltaic, as a panacea to the looming climate change menace. Finally, the paper assesses renewable energy and energy efficiency as propellers for generating new sources of income and jobs, in turn reducing carbon emissions. The research engages qualitative, evaluative, and comparative methods, employing both primary and secondary sources of information.
The primary sources of information will be drawn from the sub-Saharan African region and from global environmental organizations, energy legislation, policies, related industries, and judicial processes. The secondary sources will comprise books, journal articles, commentaries, discussions, observations, explanations, expositions, suggestions, prescriptions, and other material sourced from the internet on renewable energy as a panacea to climate change. All information obtained from these sources will be subject to content analysis. The research results will show that the entire planet is warming as a result of human activities, clear evidence that current development is fundamentally unsustainable. Equally, the study will reveal that a low-carbon development pathway should be embraced in the sub-Saharan African region to minimize the emission of greenhouse gases, such as by using renewable energy rather than coal, oil, and gas. The study concludes that until adequate strategies are devised for the use of renewable energy, the region will continue to add to and worsen the current climate change menace and other adverse environmental conditions.

Keywords: carbon dioxide, climate change, legislation/law, renewable energy

Procedia PDF Downloads 207
265 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials, such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50-100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem.
The K3 is an electron counting detector optimized for low-dose applications (like structural-biology cryo-EM), and Stela is also an electron counting detector, optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage-height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow acquisitions to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct-detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small-molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules

Procedia PDF Downloads 84
264 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains

Authors: Jing Jin

Abstract:

The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers, because the application of AI/ML models necessitates access to 'high-quality' information to yield the desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders' decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method: a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers.
Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson's correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasize strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of the 'information distortion-by-information quality' network underscores the interconnected nature of these factors. In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.
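The per-tier Pearson correlation analysis described above can be sketched as follows. The data here are simulated Likert-style responses, not the study's survey data; the tier and dimension pairings (Tier 2/accuracy positive, Tier 1/access security negative) follow the abstract, but the numbers are illustrative.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
n = 40
# Simulated perceived-distortion scores on a 1-7 Likert scale.
distortion = [random.randint(1, 7) for _ in range(n)]

# Tier 2 suppliers: accuracy judged higher as distortion rises (positive r).
accuracy_t2 = [d + random.gauss(0, 1) for d in distortion]
# Tier 1 suppliers: access security judged lower as distortion rises (negative r).
security_t1 = [-d + random.gauss(0, 1) for d in distortion]

r_t2 = pearson(distortion, accuracy_t2)
r_t1 = pearson(distortion, security_t1)
print(f"Tier 2 distortion vs accuracy:        r = {r_t2:+.2f}")
print(f"Tier 1 distortion vs access security: r = {r_t1:+.2f}")
```

In an actual analysis, each coefficient would be accompanied by a significance test before interpreting a tier's correlation as "strong".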

Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry

Procedia PDF Downloads 40
263 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard, must precede development in order to safeguard people's welfare, infrastructure, and other property at a site. The resulting information can serve as a tool for minimizing earthquake risk and can foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess earthquake hazards in the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to prepare liquefaction potential zones (LPZ) culminating in earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where the site soil conditions, geology, and geomorphology are amenable; these site conditions, which constitute the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event, the seismic wave is generated at the earthquake focus and propagates to the surface.
As it propagates, it passes through certain geological or geomorphological and specific soil features, which, depending on their strength, stiffness, and moisture content, amplify or attenuate the wave reaching the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study. This GIS-based technique integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. Each factor is weighted and ranked in order of its contribution to earthquake-induced liquefaction, and the weights and ranks assigned to the factors are normalized with the AHP technique. The spatial analysis tools in ArcGIS 10 (raster calculator, reclassify, and overlay analysis) were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' classes to indicate levels of hazard within the study region.
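The AHP weighting step described above can be sketched numerically: a pairwise comparison matrix on Saaty's scale is column-normalized and row-averaged to yield factor weights, which then drive a weighted overlay of the ranked thematic layers. The three factors and all comparison values below are hypothetical illustrations, not the study's actual matrix.

```python
# Minimal sketch of AHP weight derivation and weighted overlay for one cell.
factors = ["geology", "geomorphology", "PGA"]

# Pairwise comparison matrix on Saaty's 1-9 scale: A[i][j] is the judged
# importance of factor i relative to factor j (reciprocal matrix).
A = [
    [1.0, 2.0, 0.5],
    [0.5, 1.0, 1/3],
    [2.0, 3.0, 1.0],
]

# Normalize each column to sum to 1, then average across each row
# to obtain the factor weights (the classic approximate eigenvector method).
n = len(A)
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

for f, w in zip(factors, weights):
    print(f"{f}: weight = {w:.3f}")

# Weighted overlay for a single raster cell: each factor's reclassified
# rank (1 = very low ... 5 = very high contribution) times its weight.
cell_ranks = {"geology": 4, "geomorphology": 3, "PGA": 5}
score = sum(w * cell_ranks[f] for f, w in zip(factors, weights))
print(f"hazard score for this cell: {score:.2f}")
```

In the actual workflow, this overlay runs per cell via the ArcGIS raster calculator, and a consistency ratio check on the comparison matrix would precede use of the weights.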

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 252