Search results for: Damage scenarios
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3759

609 Household Earthquake Absorptive Capacity Impact on Food Security: A Case Study in Rural Costa Rica

Authors: Laura Rodríguez Amaya

Abstract:

The impact of natural disasters on food security can be devastating, especially in rural settings where livelihoods are closely tied to productive assets. In hazard studies, absorptive capacity is seen as a threshold that affects the degree of people’s recovery after a natural disaster. Increasing our understanding of households’ capacity to absorb natural disaster shocks can provide the international community with viable measurements for assessing at-risk communities’ resilience to food insecurity. The purpose of this study is to identify the most important factors in determining a household’s capacity to absorb the impact of a natural disaster. This is an empirical study conducted in six communities in Costa Rica affected by earthquakes. The Earthquake Impact Index was developed for the selection of the communities in this study. The households coded as total loss in the selected communities constituted the sampling frame from which the sample population was drawn. Because the study area is geographically dispersed over a large surface, a hybrid stratified cluster sampling technique was selected. Of the 302 households identified as total loss in the six communities, a total of 126 households were surveyed, constituting 42 percent of the sampling frame. A list of indicators for the absorptive capacity construct, compiled on theoretical and exploratory grounds, served to guide the survey development. These indicators were grouped into the following variables: (1) use of informal safety nets, (2) coping strategy, (3) physical connectivity, and (4) infrastructure damage. A multivariate data analysis was conducted using the Statistical Package for the Social Sciences (SPSS). The results show that informal safety nets, such as assistance from family and friends, exerted the greatest influence on the ability of households to absorb the impact of earthquakes. In conclusion, communities that experienced the highest environmental impact and human loss became disconnected from the social networks needed to absorb the shock’s impact. This resulted in higher levels of household food insecurity.
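
As an illustration of the kind of multivariate ranking described above, the sketch below fits an ordinary least-squares model on standardized synthetic stand-ins for the four survey variables and sorts them by the size of their standardized coefficients. The data, coefficient pattern, and variable weights are hypothetical and only mimic the reported finding that informal safety nets dominate; the study itself analyzed the actual survey responses in SPSS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 126  # number of surveyed households reported in the abstract

# Synthetic stand-ins for the four indicator variables (not the real survey data)
X = rng.standard_normal((n, 4))
names = ["informal safety nets", "coping strategy",
         "physical connectivity", "infrastructure damage"]
# Assumed "true" influence pattern, dominated by informal safety nets
y = X @ np.array([0.8, 0.3, 0.2, -0.4]) + 0.5 * rng.standard_normal(n)

# Standardize, then fit ordinary least squares; coefficients become comparable
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)

for name, b in sorted(zip(names, beta[1:]), key=lambda p: -abs(p[1])):
    print(f"{name:25s} standardized beta = {b:+.2f}")
```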

Keywords: absorptive capacity, earthquake, food security, rural

Procedia PDF Downloads 253
608 Downregulation of Smad-2 Transcription and TGF-B1 Signaling in Nano-Sized Titanium Dioxide-Induced Liver Injury in Mice by Potent Antioxidants

Authors: Maha Z. Rizk, Sami A. Fattah, Heba M. Darwish, Sanaa A. Ali, Mai O. Kadry

Abstract:

Although it is known that nano-TiO2 and other nanoparticles can induce liver toxicity, the mechanisms and the molecular pathogenesis are still unclear. The present study investigated some biochemical indices of nano-sized titanium dioxide (TiO2 NPs) toxicity in mouse liver and the ameliorative efficacy of individual and combined doses of idebenone, carnosine, and vitamin E. Nano-anatase TiO2 (21 nm) was administered as a total oral dose of 2.2 g/kg daily for 2 weeks, followed by the aforementioned antioxidants daily, either individually or in combination, for 1 month. TiO2 NPs induced a significant elevation in serum levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) and in hepatic oxidative stress biomarkers [lipid peroxides (LP) and nitric oxide (NOx)], while they significantly reduced glutathione reductase (GR), reduced glutathione (GSH), and glutathione peroxidase (GPx) levels. Moreover, quantitative RT-PCR analysis showed that nano-anatase TiO2 can significantly alter the mRNA and protein expression of the fibrotic factors TGF-B1, VEGF, and Smad-2. Histopathological examination of hepatic tissue reinforced the previous biochemical results. Our results also implied that inflammatory responses may be involved in nano-anatase TiO2-induced liver injury, as indicated by elevated tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), an increased percentage of DNA damage assessed by the comet assay, and an increase in the apoptotic marker caspase-3. Moreover, mRNA gene expression observed by RT-PCR showed significant overexpression of nuclear factor erythroid 2-related factor 2 (Nrf2), nuclear factor kappa B (NF-κB), and the apoptotic factor Bax, and significant downregulation of the antiapoptotic factor Bcl-2. In conclusion, idebenone, carnosine, and vitamin E ameliorated the above deviated parameters to variable degrees, with the most pronounced alleviation of the hazardous effects of TiO2 NPs toxicity achieved by the combination regimen.

Keywords: nano-anatase TiO2, TGF-B1, Smad-2

Procedia PDF Downloads 423
607 Reduction of Concrete Shrinkage without the Use of Reinforcement

Authors: Martin Tazky, Rudolf Hela, Lucia Osuska, Petr Novosad

Abstract:

Concrete’s volumetric changes are a natural process caused by the hydration of silicate minerals. These changes can lead to cracking and the subsequent destruction of the cementitious material’s matrix. In most cases, cracks can be assessed as a negative effect of hydration, and in all cases, they lead to an acceleration of degradation processes. Preventing the formation of these cracks is therefore the main goal. One possibility for eliminating this natural shrinkage process is the use of different types of dispersed reinforcement; steel and polymer reinforcement are preferably used for this purpose. Beyond the reinforcement ordinarily used to eliminate shrinkage, it is also possible to address the problem at its source, in the composition of the concrete mix itself. There are many secondary raw materials that help reduce the heat of hydration and the shrinkage of concrete during curing. Recent research also shows that shrinkage can be reduced by the controlled formation of hydration products whose morphology can act like traditionally used dispersed reinforcement. This contribution deals with the possibility of the controlled formation of mono- and trisulfate, which are usually considered degradation minerals. The controlled formation of mono- and trisulfate in a cementitious composite can be classified as a self-healing ability. The growth of these crystals acts directly against the shrinkage tension, which reduces the risk of crack development. Controlled formation means that these crystals start to grow in the fresh state of the material (e.g., concrete) but stop right before they could cause any damage to the hardened material. Waste materials with a suitable chemical composition are very attractive precursors because of their low cost and the added value of reducing landscape pollution. In this experiment, the possibility of using fly ash from fluidized bed combustion as a mono- and trisulfate formation additive was investigated. The experiment itself was conducted on cement paste and concrete, and the specimens were subjected to a thorough analysis of physico-mechanical properties as well as microstructure from the moment of mixing up to 180 days. The processes of hydration and shrinkage were monitored in the cement composites. In the mixture containing the fluidized bed combustion fly ash admixture, possible failures were identified by electron microscopy and by the dynamic modulus of elasticity. The results of the experiments show that concrete shrinkage can be reduced without using traditional dispersed reinforcement.

Keywords: shrinkage, monosulphates, trisulphates, self-healing, fluidized fly ash

Procedia PDF Downloads 185
606 Epidemiology of Hepatitis B and Hepatitis C Viruses Among Pregnant Women at Queen Elizabeth Central Hospital, Malawi

Authors: Charles Bijjah Nkhata, Memory Nekati Mvula, Milton Masautso Kalongonda, Martha Masamba, Isaac Thom Shawa

Abstract:

Viral hepatitis is a serious public health concern globally, with an estimated 1.4 million deaths annually due to liver fibrosis, cirrhosis, and hepatocellular carcinoma. Hepatitis B and C are the most common viruses that cause liver damage; however, the majority of infected individuals are unaware of their serostatus. Viral hepatitis has contributed to maternal and neonatal morbidity and mortality, and there are no updated data on the epidemiology of hepatitis B and C among pregnant mothers in Malawi. The aim of this study was to assess the epidemiology of hepatitis B and C viruses among pregnant women at Queen Elizabeth Central Hospital (QECH). The specific objectives were to determine the seroprevalence of HBsAg and anti-HCV in pregnant women at QECH, to investigate risk factors associated with HBV and HCV infection in pregnant women, and to determine the distribution of HBsAg and anti-HCV infection among pregnant women of different age groups. A descriptive cross-sectional study was conducted among pregnant women at QECH in the last quarter of 2021. Of the 114 pregnant women, 96 participants consented and were enrolled using a convenience sampling technique. Twelve participants were dropped for various reasons; therefore, 84 completed the study. A semi-structured questionnaire was used to collect socio-demographic and behavioral characteristics to assess the risk of exposure. Serum was processed from venous blood samples and tested for HBsAg and anti-HCV markers using rapid screening assays for screening and an enzyme-linked immunosorbent assay for confirmation. Of the 84 consenting pregnant women who participated in the study, 1.2% (n=1/84) tested positive for HBsAg, and none had detectable anti-HCV antibodies. No significant association was found between HBV or HCV infection and any of the socio-demographic data or putative risk variables. The findings indicate a viral hepatitis prevalence lower than the range set by the WHO, which suggests that HBV and HCV are rare in pregnant women at QECH. Nevertheless, accessible screening should be provided for all pregnant women, as the prevention of mother-to-child transmission (MTCT) is key to reducing and preventing the global burden of chronic viral hepatitis.
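
For a sense of the uncertainty around such small counts, the minimal sketch below computes exact (Clopper-Pearson) 95% confidence intervals for the reported seroprevalences of 1/84 (HBsAg) and 0/84 (anti-HCV). The abstract itself does not report confidence intervals, so this is purely illustrative and assumes scipy is available.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence interval for a proportion of k positives in n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# HBsAg: 1 positive out of 84; anti-HCV: 0 out of 84 (figures from the abstract)
for marker, k in (("HBsAg", 1), ("anti-HCV", 0)):
    lo, hi = clopper_pearson(k, 84)
    print(f"{marker}: prevalence {k / 84:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```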

Keywords: viral hepatitis, hepatitis B, hepatitis C, pregnancy, Malawi, liver disease, mother to child transmission

Procedia PDF Downloads 168
605 Menopause Hormone Therapy: An Insight into Knowledge and Attitudes of Obstetricians and Gynecologists in Singapore

Authors: Tan Hui Ying Renee, Stella Rizalina Sasha, Farah Safdar Husain

Abstract:

Introduction: Menopause Hormone Therapy (MHT) is an effective drug indicated for the treatment of menopausal symptoms and as replacement therapy in women who undergo premature menopause. In 2020, less than 8.8% of perimenopausal Singaporean women were on hormonal therapy, compared with the Western population, where up to 50% may be on MHT. Patient-level factors associated with MHT utilization have been studied, but the contribution of local prescribing physicians to low MHT utilization has yet to be evaluated. The aim of the study is to determine the level of knowledge that physicians in the Obstetrics and Gynaecology specialty have and their attitudes toward MHT. We believe that knowledge of MHT is lacking and that negative attitudes towards MHT may influence its use and undermine the benefits MHT may have for women. This paper is part of a larger study on Singaporean physicians’ knowledge of and attitudes towards MHT. Methods: This is a cross-sectional study intended to assess the knowledge and attitudes of physicians toward Menopausal Hormone Therapy. An anonymous questionnaire was disseminated via institutional internal circulations to optimize reach to physicians who may prescribe MHT, particularly in the fields of Gynaecology, Family Medicine, and Endocrinology. Responses were completed voluntarily. For each question, physicians had the option to declare that they were ‘unsure’ or that the question was ‘beyond their expertise’. Twenty-one knowledge questions tested factual recall on the indications, contraindications, and risks of MHT. The remaining 6 questions were clinical scenarios crafted with the intention of testing specific principles related to the use of MHT. These questions received face validation from experts in the field. A total of 198 responses were collected, 79 of which were from physicians in the Obstetrics and Gynaecology specialty. The data will be statistically analyzed to investigate areas that can be improved to increase the overall benefits of MHT for the Singaporean population. Results: Preliminary results show that the prevailing factors limiting Singaporean gynecologists and obstetricians from prescribing MHT are a lack of knowledge of MHT and a lack of confidence in prescribing it. The risks and indications of MHT were not well known by many physicians, with the majority of the questions having more than 25% incorrect or ‘unsure’ replies. The clinical scenario questions revealed significant shortcomings in knowledge of how to navigate real-life challenges in MHT use, with 2 of the 6 questions having more than 50% incorrect or ‘beyond their expertise’ replies. The portion of the questionnaire that investigated the attitudes of physicians showed that, although a large majority believed MHT to be an effective drug, only 40.5% were confident in prescribing it. Conclusion: Physicians in the Obstetrics and Gynaecology specialty lack knowledge of and confidence in MHT. Therefore, it is imperative to formulate solutions at both the individual and institutional levels to fill these gaps and ensure that MHT is used appropriately and prescribed to the patients who need it.

Keywords: menopause, menopause hormone therapy, physician factors, obstetrics and gynecology, menopausal symptoms, Singapore

Procedia PDF Downloads 39
604 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action

Authors: Mokhlisur Rahman

Abstract:

The growing consensus among scientists and world leaders indicates that immediate action should be taken regarding the climate change phenomenon. However, climate change is no longer only a global issue but also a personal one, and individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents the use of augmented reality (A.R.) technology in video advertising on the social media platform Facebook. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. The advertisement uses A.R. to let people make changes in real-life scenarios through simple clicks on the video and hear an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and interact with it while holding the camera toward their personal spaces. Suppose users select the first option, room, and hold their camera toward spots such as the windows, the balcony, corners, and even walls. In that case, the A.R. offers users different plants appropriate for those unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains how the user contributes to making the world more livable and why this is necessary. With the help of A.R., if users select the second option, lawn, and hold their camera toward their lawn, the options are various small trees that make the lawn more environmentally friendly and decorative; the video plays a similar explanation here too. Suppose users select the third option, driveway, and hold their camera toward their driveway. In that case, the A.R. video offers unique recycling bin designs using A.I. measurement of the spaces, and the video plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT embeds tracking code in the video ad on Facebook, which stores the exact number of views in the cloud for data analysis. An online survey at the end collects short qualitative answers. This study helps to understand how many users are involved and willing to change their behavior, and it makes advertising in social media personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance of making direct connections with individuals and gives them a sense of personal responsibility to act on climate change.

Keywords: motivations, climate, IoT, personalized advertising, action

Procedia PDF Downloads 72
603 Anti-Colitic and Anti-Inflammatory Effects of Lactobacillus sakei K040706 in Mice with Ulcerative Colitis

Authors: Seunghwan Seo, Woo-Seok Lee, Ji-Sun Shin, Young Kyoung Rhee, Chang-Won Cho, Hee-Do Hong, Kyung-Tae Lee

Abstract:

Doenjang, a traditional Korean food, is the product of a natural mixed fermentation process carried out by lactic acid bacteria (LAB). Lactobacillus sakei K040706 (K040706) has been accepted as the most abundant LAB in over-ripened doenjang. Recently, we reported the immunostimulatory effects of K040706 in RAW 264.7 macrophages and in a cyclophosphamide-induced mouse model. In this study, we investigated the ameliorative effects of K040706 in a dextran sulfate sodium (DSS)-induced colitis mouse model. Colitis was induced with DSS in 5-week-old ICR mice over 14 days, with or without oral K040706 at 0.1 or 1 g/kg/day. Body weight, stool consistency, and gross bleeding were recorded for determination of the disease activity index (DAI). At the end of treatment, the animals were sacrificed, and colonic tissues were collected and subjected to histological examination, determination of myeloperoxidase (MPO) accumulation and cytokines, qRT-PCR, and Western blot analysis. The results showed that K040706 significantly attenuated the DSS-induced DAI score, shortening of colon length, enlargement of the spleen, and immune cell infiltration into colonic tissues. Histological examination indicated that K040706 suppressed the edema, mucosal damage, and loss of crypts induced by DSS. These results were correlated with the restoration of tight junction protein expression, such as ZO-1 and occludin, in K040706-treated mice. Moreover, K040706 reduced the abnormal secretion and mRNA expression of pro-inflammatory mediators, such as nitric oxide (NO), tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), and interleukin-6 (IL-6). The DSS-induced mRNA expression of intercellular adhesion molecule (ICAM) and vascular cell adhesion molecule (VCAM) in colonic tissues was also downregulated by K040706 treatment. Furthermore, K040706 suppressed the protein and mRNA expression of toll-like receptor 4 (TLR4) and the phosphorylation of NF-κB and signal transducer and activator of transcription 3 (STAT3). These results suggest that K040706 has an anti-colitic effect through inhibition of intestinal inflammatory responses in DSS-induced colitic mice.

Keywords: Lactobacillus sakei, NF-κB, STAT3, ulcerative colitis

Procedia PDF Downloads 324
602 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, such as prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as their use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and on their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or to the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of the different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages), are heuristic, hybrid, and meta-simulation-based, and rely on big real data (such as Building Energy Management System data or web data). Obtaining, using, and combining these data with simulation data to create models that handle higher uncertainty, are more dynamic, and are more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 161
601 The Sustainable Development for Coastal Tourist Building

Authors: D. Avila

Abstract:

The tourism industry is a phenomenon with a growing presence in international socio-economic dynamics, one that in most cases exceeds the control parameters of the various environmental regulations and the sustainability of existing resources. Because of this, the effects on the natural environment at the regional and national levels represent a challenge, and a number of strategies are necessary to minimize the environmental impact generated by the occupation of the territory. Hotel tourist buildings and sustainable development in the coastal zone have an important impact on the environment and on the physical and psychological health of the inhabitants. Environmental quality links human comfort to the sustainable development of natural resources; applied to hotel architecture, this concept involves the incorporation of new demands throughout the entire construction process of a building, changing the customs of developers and users. The methodology developed provides an initial analysis to identify and rank the different tourist buildings; on this basis, it becomes feasible to establish methods of study and environmental impact assessment. Finally, it is necessary to establish an overview of the best way to implement tourism development on the coast, containing guidelines to improve and protect the natural environment. This paper analyzes the parameters and strategies to reduce the environmental impacts derived from tourism developments on the coast, through a series of recommendations towards sustainability, in the context of Bahia de Banderas, Puerto Vallarta, Jalisco. The environmental impact caused by the implementation of tourism development in a coastal environment calls for a series of processes, ranging from the identification of impacts to their prediction and evaluation. For this purpose, different techniques and valuation procedures are described below. Identification of impacts: methods for identifying damage caused to the environment pursue the general purpose of obtaining a group of negative indicators that are subsequently used in the environmental impact study. There are several systematic methods to identify the impacts caused by human activities. The present work develops a procedure based on and adapted from the Ministry of Public Works’ urban reference for environmental impact studies; the representative methods are checklists, matrices and networks, and the method of transparencies and map superposition.

Keywords: environmental impact, physical health, sustainability, tourist building

Procedia PDF Downloads 329
600 A Review on the Vulnerability of Rural Small-Scale Farmers to Insect Pest Attacks in the Eastern Cape Province, South Africa

Authors: Nolitha L. Skenjana, Bongani P. Kubheka, Maxwell A. Poswal

Abstract:

The Eastern Cape Province of South Africa is characterized by subsistence farming, which is mostly distributed in the rural areas of the province. It is estimated that cereal crops such as maize and sorghum, and vegetables such as cabbage, are grown in more than 400,000 rural households, with maize being the most dominant crop. However, compared to commercial agriculture, small-scale farmers receive minimal support from research and development, limited technology transfer on the latest production practices and systems, and have poor production infrastructure and equipment. Similarly, farmers have limited appreciation of best practices in insect pest management and control. The paper presents findings from the primary literature and personal observations on the insect pest management practices of small-scale farmers in the province. Inferences from the literature and personal experience in the production areas have led to a number of deductions regarding the level of exposure and extent of vulnerability. Farmers’ pest management practices, which included not controlling pests at all even when a pest problem was present, left their crop stands more vulnerable to pest attacks. This became more evident with the recent brown locust, African armyworm, and fall armyworm outbreaks, and with incidences of opportunistic phytophagous insects, previously collected only on wild hosts, found causing serious damage to crops. In most of these occurrences, damage to crops resulted in low or no yield. Improvements in farmers’ reaction and response to pest problems were observed only in areas where focused awareness campaigns and training on specific pests and their management techniques were carried out. This calls for a concerted effort from all role players in the sphere of small-scale crop production to train and equip farmers with relevant skills and to provide them with information on affordable and climate-smart strategies and technologies in order to create a state of preparedness. This is necessary for the prevention of substantial crop losses that may exacerbate food insecurity in the province.

Keywords: Eastern Cape Province, small-scale farmers, insect pest management, vulnerability

Procedia PDF Downloads 139
599 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications, which are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date, but the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, despite several research projects on this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change arises. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed at which productivity declines for 30-job projects is about 25 percent faster than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity: the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also affects the productivity of a project when a change is implemented; there is a higher loss of productivity when the amount of resources is restricted.
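
To make the mechanism concrete, the toy sketch below (not the authors’ simulation model) assumes a change injects rework into a fixed set of successor activities and forces the remaining work into overtime at reduced efficiency; productivity is the ratio of baseline hours to hours actually expended. All parameter values (overtime efficiency, affected activities, hours per activity) are made up for illustration, but the sketch reproduces the two qualitative trends reported above: earlier changes and smaller projects lose more productivity.

```python
# Toy model (assumed parameters): a change injects rework into a fixed number of
# successor activities and pushes the remaining work into overtime, which buys
# less output per hour worked.
def productivity_after_change(n_activities, change_time_frac,
                              affected_activities=10,
                              overtime_efficiency=0.8,
                              hours_per_activity=100.0):
    baseline = n_activities * hours_per_activity
    done = change_time_frac * baseline                 # hours completed before the change
    remaining = baseline - done
    rework = min(affected_activities, n_activities) * hours_per_activity
    overtime_hours = (remaining + rework) / overtime_efficiency
    return baseline / (done + overtime_hours)          # productivity ratio (<= 1)

for n in (30, 120):
    for frac in (0.2, 0.5, 0.8):
        p = productivity_after_change(n, frac)
        print(f"{n:3d} activities, change at {frac:.0%} complete -> productivity {p:.2f}")
```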

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 237
598 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles

Authors: Antonina A. Shumakova, Sergey A. Khotimchenko

Abstract:

The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this is the difference in the adsorption capacity of ENMs, which was further examined in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver, while in other organs (kidneys, spleen, testes, and brain) the lead content was lower than in animals of the control group; when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with increasing dose of Al2O3 NPs, whereas for other organs the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; and with the combined administration of lead and SiO2 NPs at different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, it can be assumed that there are at least three scenarios for the combined effects of ENMs and chemical contaminants on the body: (1) ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, it can be expected that the toxicity of both the ENMs and the contaminants will decrease; (2) the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants, and thus their penetration into the internal environment of the body increases, thereby increasing the toxicity of the contaminants; (3) ENMs and contaminants do not interact with each other in any way, so the toxicity of each of them is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.

Keywords: absorption, cadmium, engineered nanomaterials, lead

Procedia PDF Downloads 86
597 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as a first true alternative or extension to standard maskless nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact from the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features at below 10 nm resolution has been demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as a means of improving the performance of existing device concepts. The application range of this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to the chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features. Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 71
596 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world’s energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and also worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. According to the federal climate protection plan, Germany aims for complete decarbonization of the energy sector by 2050. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is to predict the energy requirement for 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increasing energy efficiency and reducing demand are set very ambitiously. To provide a basis for comparison, the second scenario gives results with less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which in the past were significant drivers of the increase in energy demand. The potential for reducing energy demand and increasing efficiency (on the demand side) was also investigated. In particular, current and future technological developments in the energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included. Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approximately 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand includes a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity generated must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this means that the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
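
A quick arithmetic check of the last figure, under the assumed reading that half of the delivered electricity passes through storage with roughly 50% round-trip losses:

```python
# Worked check (assumed interpretation) of the storage arithmetic in the
# very ambitious scenario.
electricity_demand = 818.0       # TWh/a, delivered electricity
stored_share = 0.5               # share that must be buffered via storage
round_trip_efficiency = 0.5      # conversion and storage losses of about 50%

direct = (1 - stored_share) * electricity_demand
via_storage = stored_share * electricity_demand / round_trip_efficiency
required_generation = direct + via_storage
print(f"required electricity generation: {required_generation:.0f} TWh/a")  # ~1,227
```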

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 133
595 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, a thorough investigation of how sub-filter-scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES. Firstly, the accuracy of direct deconvolution models (DDM) is evaluated with respect to sub-filter-scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of the FGR becomes evident, as it plays a critical role in controlling errors for precise SFS stress prediction. When the FGR is set to 1, the DDM models struggle to faithfully reconstruct the SFS stress due to inadequate resolution of the SFS dynamics. Notably, prediction accuracy improves when the FGR is set to 2, leading to accurate reconstruction of the SFS stress, except for cases involving the Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Secondly, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined for the LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients, exceeding 90%, are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
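
The reconstruction idea can be sketched in one dimension: filter a field with an invertible (here Gaussian) kernel, approximately invert the filter by van Cittert iterations, and compare the reconstructed SFS stress with the exact one a priori. This is a schematic illustration of approximate deconvolution, not the authors’ DDM implementation; the synthetic field, filter width, and iteration count are arbitrary.

```python
import numpy as np

N, L = 1024, 2 * np.pi
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
rng = np.random.default_rng(0)

# Synthetic random field with a decaying spectrum, a stand-in for a velocity signal
u_hat = (np.abs(k) + 1.0) ** (-5.0 / 6.0) * np.exp(2j * np.pi * rng.random(N))
u = np.real(np.fft.ifft(u_hat))
u = (u - u.mean()) / u.std()

delta = 8 * (L / N)                           # filter width (arbitrary)
G_hat = np.exp(-(k * delta) ** 2 / 24.0)      # Gaussian filter transfer function

def gfilter(f):
    return np.real(np.fft.ifft(G_hat * np.fft.fft(f)))

u_bar = gfilter(u)

# van Cittert iterations: approximate inverse filtering of the resolved field
u_r = u_bar.copy()
for _ in range(10):
    u_r += u_bar - gfilter(u_r)

tau_exact = gfilter(u * u) - u_bar * u_bar            # exact SFS stress (1D analogue)
tau_model = gfilter(u_r * u_r) - gfilter(u_r) ** 2    # deconvolution-based estimate

print("a priori correlation:", np.corrcoef(tau_exact, tau_model)[0, 1])
```

Increasing the filter width relative to the grid (a larger effective FGR) gives the iterations more resolved content to work with, which is the trend reported in the abstract.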

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 74
594 Evaluation of the Efficacy and Tolerance of Gabapentin in the Treatment of Neuropathic Pain

Authors: A. Ibovi Mouondayi, S. Zaher, R. Assadi, K. Erraoui, S. Sboul, J. Daoudim, S. Bousselham, K. Nassar, S. Janani

Abstract:

INTRODUCTION: Neuropathic pain (NP), caused by damage to the somatosensory nervous system, has a significant impact on quality of life and is associated with a high economic burden on the individual and society. The treatment of neuropathic pain relies on a wide range of therapeutic agents, including gabapentin. OBJECTIVE: The objective of this study was to evaluate the efficacy and tolerance of gabapentin in the treatment of neuropathic pain. MATERIAL AND METHOD: This is a monocentric, cross-sectional, descriptive, retrospective study conducted in our department over a period of 19 months, from October 2020 to April 2022. Missing parameters were collected through phone calls to the patients concerned. The diagnostic tool adopted was the DN4 questionnaire in its dialectal Arabic version. The impact of NP on pain, sleep, and function was assessed by the visual analog scale (VAS). The impact of NP on mood was assessed by the Hospital Anxiety and Depression Scale (HAD) score in its validated Arabic version. The exclusion criteria were patients followed up for depression or other psychiatric pathologies. RESULTS: Data from a total of 67 patients were collected. The average age was 64 years (+/- 15 years), with extremes ranging from 26 to 94 years. There were 58 women and 9 men, with an M/F sex ratio of 0.15. Cervical radiculopathy was found in 21% of this population, and lumbosacral radiculopathy in 61%. Gabapentin was introduced at doses ranging from 300 to 1800 mg per day, with an average dose of 864 mg (+/- 346) per day, for an average duration of 12.6 months. Before treatment, 93% of patients had non-restorative sleep quality (VAS>3), 54% of patients had a pain VAS greater than 5, and function was normal in only 9% of patients. The mean anxiety score was 3.25 (standard deviation: 2.70), and the mean HAD depression score was 3.79 (standard deviation: 1.79). After treatment, all patients had improved sleep quality (p<0.0001). A significant difference was also noted in pain VAS, function, and the HAD anxiety and depression scores. Gabapentin was stopped in cases of side effects (dizziness and drowsiness) and/or an unsatisfactory response. CONCLUSION: Our data demonstrate a favorable effect of gabapentin on the management of neuropathic pain, with a significant difference before and after treatment in the quality of life of patients, associated with an acceptable tolerance profile.

Keywords: neuropathic pain, chronic pain, treatment, gabapentin

Procedia PDF Downloads 93
593 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces

Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech

Abstract:

Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of the contacting parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material. On the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a concentration gradient. The polyphosphate chains partially adhere to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as the lubricant during two tests at 40 Hz and 50 Hz. Spectroscopic ellipsometry was used to estimate the tribofilm thicknesses because of its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarization of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. The ellipsometric responses derived from the tribofilms are modelled by an effective medium approximation (EMA), which includes the refractive indices of the materials involved, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy studies, where the presence of ZDDP, O, and C was confirmed. From the EMA models, it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than the ones formed at 50 Hz. In addition, the refractive indices of the individual materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, a characteristic of glassy semitransparent films.
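
The abstract does not specify which mixing rule was used, but the Maxwell-Garnett formula is a common choice for such effective medium approximations; the sketch below mixes two hypothetical complex refractive indices (the values are illustrative, not measured tribofilm data) into an effective index at a single wavelength.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of inclusions (volume fraction f) in a host medium."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Hypothetical complex refractive indices at one wavelength (illustrative only)
n_host = 1.58 + 0.00j        # e.g. a glassy phosphate-like host
n_incl = 2.30 + 0.40j        # e.g. an absorbing inclusion phase
f_incl = 0.25                # assumed volume fraction of the inclusion phase

eps_eff = maxwell_garnett(n_host**2, n_incl**2, f_incl)
n_eff = np.sqrt(eps_eff)     # effective complex refractive index n + ik
print(f"effective refractive index: {n_eff.real:.3f} + {n_eff.imag:.3f}i")
```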

Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate

Procedia PDF Downloads 250
592 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading

Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein

Abstract:

Articular cartilage is a fluid-swollen tissue of synovial joints that functions by providing a lubricated surface for articulation and by facilitating load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. Therefore, the assessment of articular cartilage is important in the early stages of the degenerative process to prevent or reduce further joint damage and the associated socio-economic impact, and there has consequently been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. This is because the response of articular cartilage to compressive loading is significantly depth-dependent, with the superficial zone and underlying matrix responding differently to deformation. In addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer differs between normal and proteoglycan-depleted tissue. To establish the viability of this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were subjected to compression at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe and then unloaded. The response of the articular surface, which was indirectly loaded, was monitored using ultrasound during loading/unloading (deformation/recovery). It was observed that the rate of the cartilage surface response to loading/unloading differed between normal and PG-depleted cartilage samples. Principal component analysis was performed to assess the capability of the cartilage surface response to loading/unloading to distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, while there was a clear distinction between normal and degenerated samples during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
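
A minimal sketch of this kind of analysis, assuming scikit-learn is available: synthetic surface-recovery curves are generated with different recovery rates for a "normal" and a "PG-depleted" group (the rates and noise level are invented), and the principal component scores are inspected for group separation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)           # normalized recovery time

# Synthetic surface-recovery curves; the faster rate mimics intact cartilage
def recovery_curve(rate):
    return 1.0 - np.exp(-rate * t) + 0.02 * rng.standard_normal(t.size)

normal = np.array([recovery_curve(rng.normal(8.0, 0.5)) for _ in range(20)])
depleted = np.array([recovery_curve(rng.normal(4.0, 0.5)) for _ in range(20)])

X = np.vstack([normal, depleted])
scores = PCA(n_components=2).fit_transform(X)

# If the recovery rates differ, the first principal component separates the groups
print("mean PC1, normal   :", scores[:20, 0].mean())
print("mean PC1, depleted :", scores[20:, 0].mean())
```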

Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound

Procedia PDF Downloads 191
591 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. The three key elements of MPC design are then discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators impose on the outputs of the HIL experiments. The HIL tests are investigated for kinematic and dynamic testing, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
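
For readers unfamiliar with the generic constrained MPC formulation mentioned above, the sketch below sets up a receding-horizon controller for a one-axis double-integrator model of relative spacecraft motion with a thrust-acceleration bound. It uses cvxpy as the QP front end; the dynamics, weights, horizon, and limits are illustrative assumptions, not the formulations or numbers from the paper.

```python
import numpy as np
import cvxpy as cp

# One-axis double-integrator model of relative position/velocity (illustrative only)
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

N = 20                      # prediction horizon (assumed)
u_max = 0.05                # thrust-acceleration bound, a "physical" constraint
Q = np.diag([10.0, 1.0])    # state weights (assumed)
R = np.array([[10.0]])      # input weight (assumed)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = cp.Parameter(2)

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k + 1], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # prediction model
                    cp.abs(u[:, k]) <= u_max]                   # input constraint
problem = cp.Problem(cp.Minimize(cost), constraints)

# Receding horizon: solve, apply only the first input, shift, and repeat
state = np.array([20.0, 0.0])          # start 20 m away, at rest
for _ in range(120):
    x0.value = state
    problem.solve()
    state = A @ state + B @ np.array([u.value[0, 0]])
print("final relative position and velocity:", state)
```

The same pattern extends to time-varying orbital dynamics and to convexified geometric constraints by swapping the model matrices and adding further linear or second-order-cone constraints to the list.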

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 110
590 The Effect of Physical Exercise on the Level of Nuclear Factor Kappa B in Serum, Macrophages and Myocytes

Authors: Eryati Darwin, Eka Fithra Elfi, Indria Hafizah

Abstract:

Background: Physical exercise induces a pattern of hormonal and immunological responses that prevent endothelial dysfunction by maintaining the availability of nitric oxide (NO). Regular, moderate exercise stimulates NO release, which can be considered a protective factor against cardiovascular diseases, while strenuous exercise induces increased levels of a number of pro-inflammatory and anti-inflammatory cytokines. The pro-inflammatory cytokine tumor necrosis factor-α (TNF-α) triggers endothelial activation, which results in increased vascular permeability. Nuclear factor kappa B (NF-κB) mediates the biological effects of TNF-α. Aim of Study: To determine the effect of physical exercise on the endothelium and skeletal muscle, we measured the level of NF-κB in rat serum, macrophages, and myocytes after strenuous physical exercise. Methods: Thirty male Rattus norvegicus aged eight weeks were randomly divided into five groups of six: treated groups (T) and a control group (C). The treated groups underwent strenuous physical exercise by running on a treadmill at 32 m/minute for 1 hour or until exhaustion. Blood samples, myocytes of the gastrocnemius muscle, and intraperitoneal macrophages were collected sequentially after sacrifice and investigated immediately and at 2 hours, 6 hours, and 24 hours after exercise (T1, T2, T3, and T4). The levels of NF-κB were measured by ELISA. Results: We found that the levels of NF-κB in myocytes of the treated groups whose specimens were taken immediately (T1), 2 hours after the treadmill (T2), and 6 hours after the treadmill (T3) were significantly higher than in the control group (p<0.05), while the group whose specimens were taken 24 hours after the treadmill (T4) was not significantly different (p>0.05). In macrophages as well, NF-κB in treated groups T1, T2, and T3 was significantly higher than in the control group (p<0.05), but there was no difference between T4 and the control group (p>0.05). The level of serum NF-κB was not significantly different among the treated groups or compared with the control group (p>0.05). Serum NF-κB was significantly higher than the levels in macrophages and myocytes (p<0.05). Conclusion: This study demonstrated that strenuous physical exercise stimulates the activation of NF-κB, which plays a role in vascular inflammation and muscular damage and may recover after a resting period.

Keywords: endothelial function, inflammation, NFkB, physical exercise

Procedia PDF Downloads 258
589 Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems

Authors: Daniele Losanno, Giorgio Serino

Abstract:

This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the design of damper-added structures and often reduce the real model to a pure damper-braced structure, even though this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with a viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to undergo the design earthquake. In spite of this, additional dampers can be introduced to strongly limit structural damage to acceptable values or, in some cases, to reduce the frame response to elastic behavior. This work aims to provide useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the damper coefficient (viscous coefficient or yielding force), and (c) the non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers' behavior and the non-linear plastic hinges, modelled with the Pivot hysteretic rule. Parametric analyses based on previous studies on SDOF or MDOF linear frames provide reference values for a nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. The results of the analysis also demonstrate the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure-damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
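As a rough illustration of the series brace-damper model described above, the following sketch integrates a single-degree-of-freedom frame with an elastic-perfectly-plastic restoring force and a Maxwell-type damping brace (linear supporting brace in series with a viscous damper) under a surrogate ground motion. The property values, the simplified hinge rule, and the explicit integration scheme are assumptions for illustration, not the paper's Pivot-hinge model.

```python
# Minimal sketch (illustrative only): SDOF frame with an elastic-perfectly-plastic
# restoring force retrofitted with a Maxwell-type damping brace, i.e. a linear
# supporting brace in series with a viscous damper.
import numpy as np

m, k_f, fy = 2.0e5, 8.0e6, 4.0e4        # frame mass [kg], stiffness [N/m], yield force [N]
k_b, c_d = 4.0e6, 6.0e5                 # brace stiffness [N/m], damper coefficient [N s/m]
dt, t_end = 1e-3, 20.0
t = np.arange(0.0, t_end, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t)   # surrogate ground acceleration [m/s^2]

x = v = f_brace = 0.0                   # displacement, velocity, brace (Maxwell) force
x_pl = 0.0                              # plastic offset of the frame spring
peak = 0.0
for a_g in ag:
    f_frame = k_f * (x - x_pl)
    if abs(f_frame) > fy:               # elastic-perfectly-plastic hinge (simplification)
        f_frame = np.sign(f_frame) * fy
        x_pl = x - f_frame / k_f
    a = -(f_frame + f_brace) / m - a_g  # equation of motion of the braced frame
    v += a * dt
    x += v * dt
    # Maxwell element: brace spring in series with dashpot -> f' = k_b (v - f / c_d)
    f_brace += k_b * (v - f_brace / c_d) * dt
    peak = max(peak, abs(x))

print(f"peak frame drift with damping brace: {peak * 1000:.1f} mm")
```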

Keywords: brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete frames

Procedia PDF Downloads 289
588 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding risk improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
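A minimal numerical sketch of the conflation idea follows: the conflated density is formed as the normalized product of two exponential recovery-time densities, and its mean is contrasted with a simple average of the parent means, which ignores the variance information. The rate parameters are assumed values chosen only to illustrate the computation.

```python
# Minimal numerical sketch (assumed parameter values): conflation of two recovery-time
# distributions as the normalized product of their probability density functions,
# compared with simple averaging of the parent means.
import numpy as np
from scipy import stats

x = np.linspace(0.0, 60.0, 6001)                  # recovery-time grid [days]
severe = stats.expon(scale=20.0)                  # severe-event recovery, mean 20 days
nuisance = stats.expon(scale=4.0)                 # nuisance-event recovery, mean 4 days

product = severe.pdf(x) * nuisance.pdf(x)         # pointwise product of the parent pdfs
conflated = product / np.trapz(product, x)        # normalize -> conflated density

mean_conflated = np.trapz(x * conflated, x)
mean_average = 0.5 * (severe.mean() + nuisance.mean())
print(f"conflated mean recovery ~ {mean_conflated:.2f} days")   # ~3.33 days (rates add)
print(f"simple average of means = {mean_average:.2f} days")     # 12 days, ignores variances
```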

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 102
587 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings

Authors: Abdulwakeel B. Raji

Abstract:

This paper introduces a concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology, and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain an understanding of current learning practices in Nigerian universities and how these practices compare to the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; subsequently, software was developed that deploys an artificially intelligent teaching-agent alongside an e-learning system to enhance the user learning experience and attempt to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching-agent, which has been developed using JSON and serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time. The technology also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules in order to replicate learning interactions. It covers familiar classroom scenarios such as answering users' questions, asking 'do you understand' at regular intervals and answering subsequent requests, and taking advanced user queries to give feedback at other times. The software uses deep learning techniques to learn user interactions and patterns in order to subsequently enhance the user learning experience. System testing has been carried out with undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and feedback from users show that the developed software is a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course contents.
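The rule-based matching of learner input against defined patterns can be illustrated with the short sketch below. It is not the deployed Dialogflow agent; the course topics, patterns, and the periodic 'do you understand?' check are hypothetical stand-ins for the behaviour described in the abstract.

```python
# Minimal sketch (not the deployed Dialogflow agent): rule-based matching of learner
# input against defined patterns to produce teaching responses, with a periodic
# comprehension check.
import re

RULES = [  # hypothetical rules for an introductory database course
    (r"\bwhat is (a |an )?primary key\b",
     "A primary key uniquely identifies each row in a table."),
    (r"\b(define|what is) normali[sz]ation\b",
     "Normalization organizes tables to reduce redundancy and update anomalies."),
    (r"\bexample of (a )?select\b",
     "Example: SELECT name FROM students WHERE year = 2;"),
]

def reply(utterance: str, turn: int) -> str:
    """Match the learner's utterance against the rules; every third turn, check understanding."""
    for pattern, answer in RULES:
        if re.search(pattern, utterance.lower()):
            answer_out = answer
            break
    else:
        answer_out = "I don't have that topic yet - could you rephrase the question?"
    if turn % 3 == 0:                      # periodic 'do you understand?' prompt
        answer_out += " Do you understand so far?"
    return answer_out

if __name__ == "__main__":
    for i, q in enumerate(["What is a primary key?",
                           "Can you define normalization?",
                           "Give me an example of a SELECT query"], start=1):
        print(f"Learner: {q}\nAgent:   {reply(q, i)}\n")
```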

Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence

Procedia PDF Downloads 134
586 Determination of the Toxicity of a Lunar Dust Simulant on Human Alveolar Epithelial Cells and Macrophages in vitro

Authors: Agatha Bebbington, Terry Tetley, Kathryn Hadler

Abstract:

Background: Astronauts will set foot on the Moon later this decade and are at high risk of lunar dust inhalation. Freshly-fractured lunar dust produces reactive oxygen species in solution, which are known to cause cellular damage and inflammation. Cytotoxicity and inflammatory mediator release were measured in pulmonary alveolar epithelial cells (cells that line the gas-exchange zone of the lung) exposed to a lunar dust simulant, LMS-1. It was hypothesised that freshly-fractured LMS-1 would result in increased cytotoxicity and inflammatory mediator release, owing to the angular morphology and high reactivity of fractured particles. Methods: A human alveolar epithelial type 1-like cell line (TT1) and a human macrophage-like cell line (THP-1) were exposed to 0-200 μg/ml of unground, aged-ground, and freshly-ground LMS-1 (screened at <22 μm). Cell viability, cytotoxicity, and inflammatory mediator release (IL-6, IL-8) were assessed using MTT, LDH, and ELISA assays, respectively. LMS-1 particles were characterised for their size, surface area, and morphology before and after grinding. Results: Exposure to LMS-1 particles did not result in overt cytotoxicity in either TT1 epithelial cells or THP-1 macrophage-like cells. A dose-dependent increase in IL-8 release was observed in TT1 cells, whereas THP-1 cell exposure, even at low particle concentrations, resulted in increased IL-8 release. Both cytotoxic and pro-inflammatory responses were most marked, and significantly greater, in TT1 and THP-1 cells exposed to freshly-fractured LMS-1. Discussion: LMS-1 is a novel lunar dust simulant; this is the first study to determine its toxicological effects on respiratory cells in vitro. The increased inflammatory response in TT1 and THP-1 cells exposed to ground LMS-1 suggests that small particle size, increased surface area, and angularity likely contribute to toxicity. Conclusions: Even low levels of exposure to LMS-1 could result in alveolar inflammation. This may have pathological consequences for astronauts exposed to lunar dust on future long-duration missions. Future research should test the effect of low-dose, intermittent lunar dust exposure on the respiratory system.

Keywords: lunar dust, LMS-1, lunar dust simulant, long-duration space travel, lunar dust toxicity

Procedia PDF Downloads 212
585 An Observation Approach to Reading Order for Single-Column and Two-Column Layout Templates

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably produce the correct reading order for portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting reading-order relationships between logical components, such as cross-references. Over three years of development, Foxit has built a layout recognition (LR) engine, demonstrated in revision 20601, which aims at accurate reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts, due to issues with tables, formulas, multiple small separated bounding boxes, and footers. Thus, an algorithm is developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a novel observation method (here called the MESH method) is proposed to open a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight) and the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration. For the single-column layout format, initial experiments on 7 PDF files (150 pages in total) demonstrate a 19.33% absolute improvement in reading-order accuracy using the proposed method based on the LR structure over the baseline method using the LR structure of revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results on 2 PDF files (18 pages in total) demonstrate a 44.44% absolute improvement over the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-separated-bounding-box issue can be solved using the MESH method. However, three issues remain unsolved: tables, formulas, and randomly occurring multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. Future work will address how to detect the table position on the page and how to extract the content of the table.
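The two observation parameters can be illustrated with the following sketch, which computes NRight and Nunder for each paragraph bounding box on a toy two-column page and orders the boxes accordingly. The abstract does not spell out the exact MESH ordering rule, so the sort key used here (more boxes to the right, then more boxes underneath, first) is an assumption chosen to reproduce left-column-then-right-column, top-to-bottom reading.

```python
# Minimal sketch of the observation idea (the sort key is an assumption, not the
# paper's exact MESH rule): compute NRight and Nunder for each paragraph bounding
# box and order boxes so that those with more boxes to the right and more boxes
# underneath come first.
from dataclasses import dataclass

@dataclass
class Box:
    label: str
    x0: float; y0: float; x1: float; y1: float   # page coordinates, y grows downward

def n_right(b, boxes):
    # boxes lying entirely to the right of b and overlapping it vertically
    return sum(1 for o in boxes if o is not b and o.x0 > b.x1
               and not (o.y1 < b.y0 or o.y0 > b.y1))

def n_under(b, boxes):
    # boxes lying entirely below b and overlapping it horizontally
    return sum(1 for o in boxes if o is not b and o.y0 > b.y1
               and not (o.x1 < b.x0 or o.x0 > b.x1))

def reading_order(boxes):
    return sorted(boxes, key=lambda b: (-n_right(b, boxes), -n_under(b, boxes)))

# Toy two-column page: L1..L3 in the left column, R1..R3 in the right column.
page = [Box("L2", 50, 300, 280, 500), Box("R1", 320, 50, 550, 250),
        Box("L1", 50, 50, 280, 250),  Box("R3", 320, 550, 550, 750),
        Box("L3", 50, 550, 280, 750), Box("R2", 320, 300, 550, 500)]
print([b.label for b in reading_order(page)])   # expected: L1, L2, L3, R1, R2, R3
```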

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 178
584 Therapeutic Effect of Indane 1,3-Dione Derivatives in the Restoration of Insulin Resistance in Human Liver Cells and in Db/Db Mice Model: Biochemical, Physiological and Molecular Insights of Investigation

Authors: Gulnaz Khan, Meha F. Aftab, Munazza Murtaza, Rizwana S. Waraich

Abstract:

Advanced glycation end product (AGE) precursors and their abnormal accumulation cause damage to various tissues and organs. AGEs have pathogenic implications in several diseases, including diabetes. Existing AGE inhibitors are not in clinical use, and there is a need for the development of novel inhibitors. The present investigation aimed to identify novel AGE inhibitors and assess their mechanism of action in treating insulin resistance in a mouse model of diabetes. Novel benzylidene derivatives of indan-1,3-dione were synthesized. The compounds were selected to study their mechanism of action in improving insulin resistance, in vitro, in human hepatocytes and murine adipocytes and then, in vivo, in a genetic mouse model of diabetes (db/db). Mice were treated with the novel benzylidene derivatives of indane-1,3-dione. AGE-mediated ROS production was measured by dihydroethidium fluorescence assay. AGE levels in the serum of treated mice were measured by ELISA. Gene expression of the receptor for AGEs (RAGE), PPAR-gamma, TNF-alpha, and GLUT-4 was evaluated by RT-PCR. Glucose uptake was measured by a fluorescent method. Microscopy was used to analyze glycogen synthesis in muscle. Among the several benzylidene derivatives of indan-1,3-dione, IDD-24 demonstrated the highest inhibition of AGEs. IDD-24 significantly reduced AGE formation and the expression of the receptor for advanced glycation end products (RAGE) in the fat and liver of db/db mice. Suppression of AGE-mediated ROS production was also observed in hepatocytes and fat cells after treatment with IDD-24. Glycogen synthesis was increased in the muscle tissue of mice treated with IDD-24. In adipocytes, IDD-24 prevented the AGE-induced reduction in glucose uptake. Mice treated with IDD-24 exhibited increased glucose tolerance and serum adiponectin levels and decreased insulin resistance. The results of the present study suggest that IDD-24 is a possible therapeutic candidate for addressing glycotoxin-induced insulin resistance.

Keywords: advanced glycation end product, hyperglycemia, indan-1,3-dione, insulin resistance

Procedia PDF Downloads 157
583 Reducing the Incidence Rate of Pressure Sore in a Medical Center in Taiwan

Authors: Chang Yu Chuan

Abstract:

Background and Aim: A pressure sore is not only the consequence of gradual damage to the skin leading to tissue defects but also an important indicator of the quality of clinical care. If hospitalized patients develop pressure sores and do not receive proper care, the result is delayed healing, wound infection, increased physical pain, prolonged hospital stays, and even death, which negatively affects the quality of care and increases nursing workload and medical costs. This project aimed at decreasing the incidence of pressure sores in one internal medicine ward. Our data showed 53 cases (0.61%) of pressure sores in 2015, which exceeded the average (0.5%) of the Taiwan Clinical Performance Indicator (TCPI) for medical centers. The purpose of this project was to reduce the incidence rate of pressure sores in the ward. After data collection and analysis from January to December 2016, the following reasons for developing pressure sores were found: (1) lack of knowledge about pressure sore prevention among nursing staff; (2) no relevant courses on pressure ulcer prevention and pressure wound care held in the unit; (3) a low completion rate of the pressure sore care education that family members should receive from nursing staff; (4) insufficient decompression equipment; and (5) lack of standard procedures for body-turning and positioning care. After team brainstorming, several strategies were proposed, including holding in-service education, training pressure sore care seed nurses, purchasing decompression mattresses and memory pillows, designing more health education tools, such as health education pamphlets, posters, and multimedia films demonstrating body-turning and positioning, and formulating and promoting standard operating procedures. In this way, nursing staff can understand the body-turning and positioning guidelines for pressure sore prevention and enhance the quality of care. After the implementation of this project, the pressure sore incidence density decreased significantly from 0.61% (53 cases) to 0.45% (28 cases) in this ward. The project shows good results, sets a good example for nurses working on the ward, and helps to enhance the quality of care.

Keywords: body-turning and positioning, incidence density, nursing, pressure sore

Procedia PDF Downloads 266
582 Hearing Threshold Levels among Steel Industry Workers in Samut Prakan Province, Thailand

Authors: Petcharat  Kerdonfag, Surasak Taneepanichskul, Winai Wadwongtham

Abstract:

Industrial noise is usually considered a major environmental health and safety concern because exposure can cause serious permanent hearing damage. Despite strict hearing protection standards and extensive campaigns to raise public health awareness among industrial workers in Thailand, noise-induced hearing loss remains a major obstacle to workers' health. The aims of the study were to determine the hearing threshold levels among steel industry workers assigned to work zones with higher noise levels and to examine the relationships between hearing loss and workers' age and length of employment in Samut Prakan province, Thailand. A cross-sectional study design was used. Ninety-three steel industry workers from two factories, assigned to designated higher-noise zones (>85 dBA), with more than 1 year of employment, selected by simple random sampling and available to participate, were assessed by audiometric screening at the regional Samut Prakan hospital. Screening data were collected from October to December 2016 by an occupational medicine physician and a qualified occupational nurse. All participants were examined by the same examiners for validity. Audiometric testing was performed at least 14 hours after the last noise exposure in the workplace. Workers' age and length of employment were gathered with a purpose-developed occupational record form. Results: Workers' ages ranged from 23 to 59 years (mean = 41.67, SD = 9.69), and the length of employment ranged from 1 to 39 years (mean = 13.99, SD = 9.88). Fifty-three (60.0%) of the participants had been exposed to hazardous noise in the workplace for more than 10 years, twenty-three (24.7%) for 5 years or less, and seventeen (18.3%) for 5 to 10 years. Using a cut-off of 25 dB for normal hearing thresholds, the mean hearing thresholds at 4, 6, and 8 kHz were 31.34, 29.62, and 25.64 dB for the right ear and 40.15, 32.20, and 25.48 dB for the left ear, respectively. Older workers in the hazardous noise zone had higher hearing thresholds at 4, 6, and 8 kHz in the right ear (p = .012, p = .026, and p = .024, respectively) and at 4 kHz in the left ear (p = .009). Conclusion: Participants' age in the hazardous noise work zone was significantly associated with different degrees of hearing loss, while the length of employment was not. Thus, hearing threshold levels among industrial workers should be assessed regularly, and hearing protection should be provided from the start of employment.
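The reported age-threshold association can be illustrated with a short sketch of the kind of correlation test involved, run on simulated data (the study's records are not reproduced here); the age range and sample size follow the abstract, while the threshold model is an assumption.

```python
# Minimal sketch with simulated data (not the study's records): testing the
# association between worker age and the 4 kHz right-ear hearing threshold
# with a Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age = rng.uniform(23, 59, size=93)                     # 93 workers, age range as reported
threshold_4k = 10 + 0.5 * age + rng.normal(0, 8, 93)   # assumed age-related threshold shift [dB]

r, p = stats.pearsonr(age, threshold_4k)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")             # p < .05 would mirror the reported association
```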

Keywords: hearing threshold levels, hazard of noise, hearing loss, audiometric testing

Procedia PDF Downloads 227
581 Antioxidant Status in Synovial Fluid from Osteoarthritis Patients: A Pilot Study in Indian Demography

Authors: S. Koppikar, P. Kulkarni, D. Ingale , N. Wagh, S. Deshpande, A. Mahajan, A. Harsulkar

Abstract:

The crucial role of reactive oxygen species (ROS) in the progression of osteoarthritis (OA) pathogenesis has been endorsed several times, though the exact mechanism remains unclear. Oxidative stress is known to instigate classical stress factors such as cytokines, chemokines, and ROS, which hamper the cartilage remodelling process and ultimately worsen the disease. Synovial fluid (SF) is a biological communicator between cartilage and synovium that accumulates redox and biochemical signalling mediators. The present work attempts to measure several oxidative stress markers in the synovial fluid obtained from knee OA patients with varying degrees of disease severity. Thirty OA and five meniscal-tear (MT) patients were graded using the Kellgren-Lawrence scale and assessed for nitric oxide (NO), nitrate-nitrite (NN), 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant potential (FRAP), catalase (CAT), superoxide dismutase (SOD), and malondialdehyde (MDA) levels for comparison. Of the various oxidative markers studied, NO and SOD showed a significant difference between moderate and severe OA (p = 0.007 and p = 0.08, respectively), whereas CAT demonstrated a significant difference between the MT and mild groups (p = 0.07). Interestingly, NN revealed a statistically significant positive correlation with OA severity (p = 0.001 and p = 0.003). MDA, a lipid peroxidation by-product, was highest in early OA compared to MT (p = 0.06). However, FRAP did not show any correlation with OA severity or the MT control. NO is a bio-regulatory molecule essential for several physiological processes and inflammatory conditions. However, due to its short half-life, exact estimation of NO is difficult; NO and its stable, measurable products are still considered important biomarkers of oxidative damage. The levels of NO and nitrate-nitrite in the SF of patients with OA indicated their involvement in disease progression. When the SF groups were compared, a significant correlation among the moderate, mild, and MT groups was established. To summarize, the present data illustrate higher levels of NO, SOD, CAT, DPPH, and MDA in early OA in comparison with the MT control group. NN emerged as a prognostic biomarker in knee OA patients and may serve as a future target in OA treatment.
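A brief sketch of the kind of group comparison and severity correlation described above follows, using synthetic marker values (the patients' data are not reproduced here); the choice of a Mann-Whitney U test and a Spearman correlation, and all numerical values, are illustrative assumptions.

```python
# Minimal sketch with synthetic values (not the patients' data): a Mann-Whitney U test
# between moderate and severe OA for NO, and a Spearman correlation of nitrate-nitrite
# (NN) with Kellgren-Lawrence (KL) grade.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
no_moderate = rng.normal(30, 5, 15)               # assumed NO levels, moderate OA
no_severe = rng.normal(38, 5, 15)                 # assumed NO levels, severe OA
u_stat, p_groups = stats.mannwhitneyu(no_moderate, no_severe, alternative="two-sided")

kl_grade = rng.integers(1, 5, 30)                 # KL grades 1-4 for 30 OA patients
nn = 10 + 4 * kl_grade + rng.normal(0, 3, 30)     # assumed NN rising with severity
rho, p_corr = stats.spearmanr(kl_grade, nn)

print(f"NO moderate vs severe: U = {u_stat:.0f}, p = {p_groups:.3f}")
print(f"NN vs KL grade: rho = {rho:.2f}, p = {p_corr:.4f}")
```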

Keywords: antioxidant, knee osteoarthritis, oxidative stress, synovial fluid

Procedia PDF Downloads 475
580 Climate Change Adaptation Strategy Recommended for the Conservation of Biodiversity in Western Ghats, India

Authors: Mukesh Lal Das, Muthukumar Muthuchamy

Abstract:

A climate change adaptation strategy (AS) is a scientific approach to dealing with the impacts of climate change (CC). Efforts are being made to contain global greenhouse gas emissions within threshold limits, thereby limiting the rise of global temperature to an optimal level. Global climate change is an ongoing process; therefore, reversing the damage would take decades. The climate change adaptation strategies recommended by various stakeholders could be a key to resilience for biodiversity. The Indian Government has constituted a panel to synthesize climate change action reports at the federal and state levels. This review surveys the published literature on the Western Ghats hotspot and highlights the adaptation strategies recommended by diverse scientific actors to conserve biodiversity. It also reviews the grey literature adopted by state and federal governments and its effectiveness in mitigating the impacts on biodiversity. We have narrowed the scope of interest to the state action plans of the six Indian states that host the Western Ghats global biodiversity hotspot: Gujarat, Maharashtra, Goa, Karnataka, Kerala, and Tamil Nadu. The Western Ghats (WGs) act as the water tower of the peninsular part of India, and their extensive watershed caters to the water demand of industry, agriculture, and urban communities. Conservation of the WGs is the key to the prosperity of peninsular India. The global scientific community has suggested more than 600 climate change adaptation strategies for policymakers, stakeholders, and other state actors to act upon proactively. Preliminary analysis of the federal and state action plans on climate change indicates that action is inadequate relative to the recommended scientific adaptation strategies. The Tamil Nadu and Kerala plans contain nine effective adaptation strategies out of the 40+ recommended for Western Ghats conservation, while the other four states' adaptation strategies are deficient, confusing, and vague. The WGs' resilience capacity will soon reach, or may already have reached, its threshold, and the frequency of severe droughts and flash floods may surge manifold in the decades to come. The lack of a clear roadmap for climate change adaptation strategies in the federal and state action plans prompted us to identify this gap and address it by offering a holistic approach to WGs biodiversity conservation.

Keywords: adaptation strategy, biodiversity conservation, climate change, resilience, Western Ghats

Procedia PDF Downloads 103