Search results for: massive columns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 943


43 Exploring the Effect of Nursing Students’ Self-Directed Learning and Technology Acceptance through the Use of Digital Game-Based Learning in Medical Terminology Course

Authors: Hsin-Yu Lee, Ming-Zhong Li, Wen-Hsi Chiu, Su-Fen Cheng, Shwu-Wen Lin

Abstract:

Background: The use of medical terminology is essential for professional nurses in clinical practice. However, most nursing students consider traditional lecture-based teaching of medical terminology boring and overly conceptual, and they lack motivation to learn. How to enhance nursing students’ self-directed learning and improve learning outcomes in medical terminology is therefore an open question. Digital game-based learning is a learner-centered approach. Past literature shows that the most common game types used in language education have been immersive games and teaching games. This study therefore selected role-playing games (RPG) and digital puzzle games for observation and comparison, exploring whether digital game-based learning has a positive impact on nursing students’ learning of medical terminology and whether students adapt well to this type of learning. The results can serve as references for institutes and teachers of medical terminology. Objective: The purpose of this research is to explore the respective impacts of RPG and puzzle games on nursing students’ self-directed learning and technology acceptance, and to discuss whether different game types exert different influences on these outcomes. 
Methods: A quasi-experimental design was adopted so that repeated measures between two groups could be conducted. 103 nursing students from a nursing college in Northern Taiwan participated in the study. During the three-week experiment, the experimental group (n=52) received “traditional teaching + RPG” while the control group (n=51) received “traditional teaching + puzzle games”. Results: 1. On self-directed learning: For each game type, the delayed tests differed significantly from the pre- and post-tests of each group. However, there were no significant differences between the two game types. 2. On technology acceptance: For the experimental group, there were no significant differences in technology acceptance after the RPG intervention. For the control group, there were significant differences after the puzzle-game intervention. Pearson correlation coefficients and path analysis conducted on the results of the two groups revealed that the dimensions were highly correlated and reached statistical significance. However, the comparison of technology acceptance between the two game types did not reach statistical significance. Conclusion and Recommendations: This study found that, through the use of different digital games for learning, nursing students effectively improved their self-directed learning. Students’ technology acceptance was also high for both game types, and each dimension was significantly correlated. The results of the experimental group showed that, through the scenarios of the RPG, students gained a deeper understanding of medical terminology, reaching the ‘Understand’ level of Bloom’s taxonomy. The results of the control group indicated that digital puzzle games could help students memorize and review medical terminology, reaching the ‘Remember’ level of Bloom’s taxonomy. 
The findings suggest that teachers of medical terminology could use digital games to assist their teaching according to their cognitive learning goals. Adequate use of such games could help improve students’ self-directed learning and further enhance their learning outcomes in medical terminology.

Keywords: digital game-based learning, medical terminology, nursing education, self-directed learning, technology acceptance model

Procedia PDF Downloads 166
42 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction

Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier

Abstract:

The 316 Nb austenitic stainless steel, chosen for the design and assembly of massive components in the nuclear industry, is well suited to this function thanks to its mechanical, heat-resistance and corrosion-resistance properties. However, these properties may change during the steel’s service life due to thermal ageing, which causes changes within its microstructure. Our main purpose is to determine whether 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) over a long period of time (< 10 years). 316 Nb is composed of different phases: austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite-to-austenite transformation during processing. Our purpose is to understand the effects of thermal ageing on the material’s microstructure and properties, and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on the Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1 vol% ferrite) and welds (10 vol% ferrite) at various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Temperatures above the service temperature were chosen to shorten the thermal treatment by exploiting the kinetic effect of temperature on 316 Nb ageing without modifying the reaction mechanisms. Our results from early ageing times show no effect on the steel’s global properties linked to austenite stability, but an increase in ferrite hardness during thermal ageing was observed. It has been shown that austenite’s crystalline structure (fcc) grants it thermal stability, whereas the ferrite crystalline structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of the effects of thermal ageing on the ferrite’s microstructure were therefore necessary to understand the changes caused by the thermal treatment. 
Analyses were performed using techniques such as Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). A demixing of the alloy’s elements leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α’ phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite was observed and associated with the increase in ferrite hardness. The APT results provide information about the phases’ volume fractions and compositions, allowing us to relate hardness measurements to the volume fractions of the different phases and to set up a way of calculating the growth rates of the α’ and nickel-rich particles as a function of temperature. The same methodology was applied to the DSC results, which allowed us to measure the enthalpy of α’ phase dissolution between 500 and 600 °C. In summary, we started from mechanical and macroscopic measurements and explained the results through microstructural study. The data obtained were matched to CALPHAD model predictions and used to improve these calculations, which are then employed to predict changes in 316 Nb properties during industrial service.
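The accelerated-ageing strategy above (raising the temperature to shorten treatment time without changing the reaction mechanisms) is commonly modeled with an Arrhenius time-temperature equivalence. The sketch below assumes a purely hypothetical activation energy of 200 kJ/mol, not a value from the study:

```python
import math

def equivalent_ageing_time(t_service_h, T_service_K, T_lab_K, Ea_J_mol):
    """Arrhenius time-temperature equivalence: the laboratory time at
    T_lab_K producing the same ageing as t_service_h at T_service_K."""
    R = 8.314  # gas constant, J/(mol K)
    return t_service_h * math.exp((Ea_J_mol / R) * (1 / T_lab_K - 1 / T_service_K))

# Hypothetical: 10 years of service at 300 C compressed to a lab test at 400 C
t_lab = equivalent_ageing_time(10 * 8760, 573.15, 673.15, 200e3)
```

With these assumed numbers, a decade of service ageing collapses to a few hundred laboratory hours, which is the order of magnitude that makes 1 to 10,000 hour campaigns practical.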

Keywords: stainless steel characterization, atom probe tomography (APT), Vickers hardness, differential scanning calorimetry (DSC), thermal ageing

Procedia PDF Downloads 92
41 Legume Grain as Alternative to Soya Bean Meal in Small Ruminant Diets

Authors: Abidi Sourour, Ben Salem Hichem, Zoghlemi Aziza, Mezni Mejid, Nasri Saida

Abstract:

In Tunisia, there is an urgent need to maintain food security by reversing soil degradation and improving crop and livestock productivity. Conservation Agriculture (CA) can help enhance crop productivity and soil health. However, the demand for crop residues as animal feed is among the major constraints to the adoption of CA. Thus, the objective of this trial was to test the nutritional value of new forage mixture hays as alternatives to cereal residues. Two tri-specific cereal-legume mixtures were studied and compared to the classic vetch-oat one. They were implemented at farm level in four regions characterized by a sub-humid climate: V70-A15-T15 (vetch 70% - oat 15% - triticale 15%), installed at two sites (Z’hir and Safsafa), V60-A7-T33 (vetch 60% - oat 7% - triticale 33%), and V70-A30 (vetch 70% - oat 30%). Results revealed significant variation between mixtures: V70-A15-T15 installed at Safsafa recorded the highest forage yield, 12 t DM ha-1, compared to V60-A7-T33 and V70-A30 installed at Ksar Cheikh and Fernana, with 11.6 and 11.2 t DM ha-1, respectively. The same mixture installed at Z’hir gave 22% lower yields than the one installed at Safsafa; in fact, the month of March was dry at Z’hir. These DM yields are comparable to those observed by Yucel and Avci (2009). The CP contents of the samples varied significantly between the mixtures (P<0.0003). V70-A15-T15 installed at Safsafa and V70-A30 presented higher CP contents (14.4 and 13.7% DM, respectively) compared to the other mixtures. These contents are explained by the high proportion of vetch in the former and by the low proportion of weeds in the latter. In all cases, the hay produced from these mixtures is significantly richer in protein than that of oats in pure culture (Abdelraheem et al., 2019). The positive correlation between CP content and the proportion of vetch explains this superior quality. The NDF and ADF contents were similar for all mixtures. 
These values were similar to those reported in the literature (Abidi and Benyoussef, 2019; Haj-Ayed et al., 2000). In general, the Land Equivalent Ratio (LER) was significantly greater than 1 for the vetch-oat-triticale mixture at Z’hir and Safsafa, and also for the vetch-oat mixture at Fernana, proving that they are more productive in intercropping than in pure culture. For the Ksar Cheikh site, the LER value of the vetch-oat-triticale mixture remained around 1, indicating no advantage of the mixed culture over pure culture; the massive presence of weeds interfered with both partners of the mixture. The LER for the vetch-oat mixture reached its maximum on March 13 and decreased in April but remained above 1. This shows that the supporting role of the oats remained constant until an advanced stage, since the variety used is characterized by very thick stems, protecting it from the risk of lodging. These forage mixtures present a promising option of high nutritional quality that could reduce the use of concentrate and, therefore, the cost of feed. With such feed value, these mixtures allow good animal performance.
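The Land Equivalent Ratio follows the standard definition: the sum, over the species in the mixture, of each species’ yield in the mixture divided by its yield in pure stand. The sketch below uses hypothetical yields, not the trial’s data:

```python
def land_equivalent_ratio(mix_yields, pure_yields):
    """LER = sum_i (yield of species i in mixture / yield of species i in
    pure stand). LER > 1 means intercropping out-yields the pure cultures."""
    return sum(m / p for m, p in zip(mix_yields, pure_yields))

# Hypothetical yields (t DM/ha) for vetch, oat, triticale: mixture vs pure stands
ler = land_equivalent_ratio([4.0, 3.0, 2.5], [6.0, 5.0, 4.0])
```

Here each species yields less than in monoculture, yet the combined LER exceeds 1, which is exactly the intercropping advantage the abstract reports.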

Keywords: soybean, lupine, vetch, lamb-ADG, meat

Procedia PDF Downloads 87
40 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to elucidate cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the big size of the data and their complexity, the need to search for hidden structures in the data, knowledge mining to discover biological function, and systems-biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. 
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
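As a rough illustration of the SOM idea, the sketch below trains a minimal self-organizing map on toy “expression profiles”. It is not the authors’ pipeline: the grid size, learning schedule and data are invented for the example.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=30, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: each grid node holds a prototype
    vector; each sample pulls its best-matching node and that node's
    grid neighbors toward it, with decaying rate and neighborhood."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in data:
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-dist2 / (2 * sigma ** 2))            # neighborhood weight
            nodes += lr * nb[:, None] * (x - nodes)
    return nodes.reshape(h, w, -1)

# Toy profiles: two well-separated sample groups land on different map regions
data = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 5)),
                  np.random.default_rng(2).normal(3, 0.1, (20, 5))])
som = train_som(data)
```

The trained grid is the “portrait”: samples from different molecular groups activate different map regions, which is what makes the images visually comparable across patients.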

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 148
39 Potential for Massive Use of Biodiesel for Automotive in Italy

Authors: Domenico Carmelo Mongelli

Abstract:

The context of this research is the Italian situation: in order to comply with the EU directives that prohibit the production of internal combustion engines in favor of electric mobility from 2035, Italy is extremely concerned about the significant loss of jobs resulting from the difficulty of the automotive industry in converting in such a short time, and about the reticence of potential buyers in the face of such an epochal change. The aim of the research is to evaluate, for Italy, the potential of the most valid alternative to this transition to electric mobility: keeping the current production of diesel engines unchanged, but powering them no longer with imported diesel fuel, responsible for greenhouse gas emissions, and instead entirely with a nationally produced, eco-sustainable fuel such as biodiesel. Today in Italy, the percentage of biodiesel blended into diesel fuel is too low (around 10%); for this reason, this research evaluates the operation of current diesel engines powered 100% by biodiesel and the ability of the Italian production system to support this scenario. The research geographically identifies the abandoned lands in Italy, now out of the food market, that are best suited to energy crops for the production of biodiesel. Oilseed crops are identified that, for the Italian agro-industrial context, maximize the agricultural and industrial yields of converting the crop into a final energy product while minimizing the production costs of the entire agro-industrial chain. To achieve this objective, specific databases are used, and energy and economic balances are prepared for the different crop alternatives. Solutions are proposed and tested that allow the optimization of all production phases, both agronomic and industrial. 
The biodiesel obtained from the most feasible of the alternatives examined is analyzed, its compatibility with current diesel engines is assessed, and, from the evaluation of its thermo-fluid-dynamic properties, the engineering measures that allow current internal combustion engines to function properly are identified. Engine-bench test results are used to evaluate the performance of different engines fueled with biodiesel alone, in terms of power, torque, specific consumption and useful thermal efficiency, compared with the performance of engines fueled with the blend currently on the market. Bench results are likewise used to evaluate the pollutant emissions of engines powered only by biodiesel, compared with current emissions. Finally, we simulate the total replacement of fossil diesel with biodiesel as the fuel for the current fleet of diesel vehicles in Italy, drawing the necessary conclusions in technological, energy, economic, and environmental terms, as well as in terms of social and employment implications. The results allow us to evaluate the potential advantage of a total replacement of diesel fuel with biodiesel for powering road vehicles with diesel-cycle internal combustion engines, without significant changes to the current vehicle fleet and without requiring future changes to the automotive industry.

Keywords: biodiesel, economy, engines, environment

Procedia PDF Downloads 73
38 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations exceeding acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. The issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory storage facility for over two decades. Recently, the theft of some containers of the toxic substance from the Thor Chemicals warehouse, and the subsequent fire that ravaged the facility, further put the factory in the spotlight and escalated the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, Umgeni River, and Inanda Dam, as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions are favourable for microbial activity to methylate mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sampling site from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. 
One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference (LSD) post hoc test to determine whether mercury contamination varies with distance from the source of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher Hg levels than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate higher mercury concentrations than terrestrial plants and crops.
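The planned one-way ANOVA can be sketched as below. The site grouping mirrors the sampling design, but all Hg concentrations are hypothetical placeholders, not measurements from the project:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Hg levels (ug/g) at increasing distance from the plant
source = np.array([12.1, 11.8, 12.5, 12.0])
river  = np.array([6.3, 6.8, 6.1, 6.5])
dam    = np.array([1.2, 1.0, 1.4, 1.1])
f_stat = one_way_anova_f([source, river, dam])
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) would justify the LSD post hoc comparisons between individual site pairs.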

Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth

Procedia PDF Downloads 221
37 International Trade, Manufacturing and Employment: The First Two Decades of South African Democracy

Authors: Phillip F. Blaauw, Anna M. Pretorius

Abstract:

South Africa re-entered the international economy in the early 1990s, after Apartheid, at a time when globalisation was gathering momentum. Globalisation led to a more open economy, increased export volumes and a changed export mix, with manufactured goods gaining ground relative to mining products. After 21 years of democracy, South African researchers and policymakers need to evaluate the impact of international trade on employment levels and employee compensation in the South African manufacturing industry. This is important given the consistently high levels of unemployment in South Africa. This evaluation is the aim of this paper. Two complementary approaches are utilised. First, the 27 subdivisions of the South African manufacturing industry are classified according to capital/labour ratios, and trends in employment levels and employee compensation for these categories are identified by comparing levels in 1995 to those in 2014. The complementary empirical approach comprises cross-sectional and panel data regressions for the same period, aiming to explain the observed changes in employment and employee compensation levels between 1995 and 2014. The first part of the empirical analysis revealed that over the 20-year period the intermediate capital-intensive, labour-intensive and ultra-labour-intensive manufacturing industries all showed massive declines in overall employment. Only three of the 19 industries in these classifications showed marginal overall employment gains, and the only meaningful gains were recorded in three of the eight capital-intensive manufacturing industries. The overall performance of the South African manufacturing industry is therefore dismal at best. This scenario plays itself out for the skilled section of the intermediate capital-intensive, labour-intensive and ultra-labour-intensive manufacturing industries as well. 
Indeed, 18 out of the 19 industries displayed declines even for the skilled section of the labour force. The formal regression analysis supplements the above results. Real production growth is a statistically significant (95 per cent confidence level) explanatory variable of the overall employment level for the period under consideration, albeit with a small positive coefficient. The variables with the most significant negative relationship with changes in overall employment were the dummy variables for intermediate capital-intensive and labour-intensive manufacturing goods. Disaggregating the overall changes in employment by skill level revealed that skilled employment in particular responded negatively to increases in the ratio of imported to local inputs for manufacturing. The dummy variable for the labour-intensive sectors remained negative and statistically significant, indicating that the labour-intensive sectors of South African manufacturing remain vulnerable to the loss of employment opportunities. Whereas the first period (1995 to 2001) after the opening of the South African economy brought positive changes for skilled employment, continued increases in imported inputs displaced some of the skilled labour as well, putting further pressure on an economy with already high and persistent unemployment. Given the negative outlook for the world commodity cycle and a stagnant local manufacturing sector, the challenge for policymakers has become even more pronounced after South Africa’s political coming of age.
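The kind of cross-sectional regression with sector dummies described above can be sketched as follows. The data-generating process, sample and coefficients are invented for illustration and are not the study’s estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 27  # subdivisions of manufacturing, as in the study's classification

growth = rng.normal(2.0, 1.0, n)                        # real production growth, %
labour_intensive = (rng.random(n) < 0.5).astype(float)  # sector dummy (0/1)

# Hypothetical process: small positive growth effect, large negative
# effect of being a labour-intensive sector, plus noise
d_employment = 0.3 * growth - 4.0 * labour_intensive + rng.normal(0, 0.2, n)

# OLS via least squares: intercept, growth coefficient, dummy coefficient
X = np.column_stack([np.ones(n), growth, labour_intensive])
beta, *_ = np.linalg.lstsq(X, d_employment, rcond=None)
```

The estimated coefficients recover the assumed signs: a small positive coefficient on growth and a large negative coefficient on the labour-intensive dummy, mirroring the qualitative pattern the paper reports.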

Keywords: capital/labour ratios, employment, employee compensation, manufacturing

Procedia PDF Downloads 219
36 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, and tribal ones played a part in the development of constitutions and the legal systems of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a point of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goal of this internship was to test several predictive modelling solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution’s forecast period lasted seven days. 
Statistical models and neural network models were the two main groups into which the experiments were split. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A detailed picture of each model’s performance on the available data, and of its generalizability, is provided by a comparative analysis of all the models on a comparable dataset. The experiments demonstrated that, compared to the other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from Nepal Police Headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
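A GRU processes a daily-count series one step at a time through gated updates. The minimal NumPy cell below illustrates the mechanism only; it is a sketch of the architecture with random, untrained weights, not the study’s forecasting model:

```python
import numpy as np

def gru_step(x, h, W, U, b):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde.
    W, U, b each stack the parameters for (z, r, h_tilde)."""
    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U
    bz, br, bh = b
    z = sigmoid(x @ Wz + h @ Uz + bz)          # how much to overwrite the state
    r = sigmoid(x @ Wr + h @ Ur + br)          # how much past state to expose
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)
    return (1 - z) * h + z * h_tilde           # convex blend of old and new

# Feed a toy daily-crime-count sequence through the cell
rng = np.random.default_rng(0)
d_in, d_h = 1, 8
W = rng.normal(0, 0.3, (3, d_in, d_h))
U = rng.normal(0, 0.3, (3, d_h, d_h))
b = np.zeros((3, d_h))
h = np.zeros(d_h)
for x_t in [[1.0], [3.0], [2.0], [5.0]]:
    h = gru_step(np.array(x_t), h, W, U, b)
```

In a forecasting setup, the final hidden state would feed a linear output layer predicting the next seven days, and the gates are what let the model keep or discard older daily patterns.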

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 164
35 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of growing, paramount concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental bodies and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ limited ability to manage files and documents about the environmental problem effectively further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities: data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, and the application of enforcement despite disparate data. All of these factors place massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, BCom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. 
• Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress—including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime-visibility of report status, user accountability, scheduling, management, and foster governmental transparency. • Maintain Compliance Reporting through highly defined, detailed and automated reports—enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 94
34 Study of Secondary Metabolites of Sargassum Algae: Anticorrosive and Antibacterial Activities

Authors: Prescilla Lambert, Christophe Roos, Mounim Lebrini

Abstract:

For several years, the Caribbean islands and West Africa have had to deal with the massive arrival of the brown seaweed Sargassum. This macroalga, although it constitutes a habitat for a great diversity of marine organisms, is also an additional stress factor for the marine environment (e.g., coral reefs). In addition, the accumulation and subsequent large-scale decomposition of Sargassum spp. biomass on the coast leads to the release of toxic gases (H₂S and NH₃), disrupting the economic, health and tourist life of the island and the other affected territories. These algal blooms originate from ocean eutrophication, accentuated by global warming. Unfortunately, scientists predict a significant recurrence of these Sargassum strandings in the years to come. It is therefore more than necessary to find solutions by putting in place a sustainable management plan for this phenomenon. Martinique, a small island in the Caribbean arc, is one of the many areas impacted by Sargassum strandings. Since 2011, there has been a constant increase in the degradation of materials in this region, largely due to the toxic and corrosive gases released by the decomposing algae. In order to protect structures and vulnerable building materials while limiting the use of synthetic, petroleum-based molecules as much as possible, research is being conducted on molecules of natural origin. Thanks to a chemical composition comprising molecules with interesting properties, algae such as Sargassum could potentially help solve many of these issues. This study therefore focuses on the green extraction and characterization of molecules from the species Sargassum fluitans and Sargassum natans present in Martinique. The secondary metabolites found in these extracts showed variability in yield due to local climatic conditions. 
The tests carried out shed light on the anticorrosive and antibacterial potential of the algae; these extracts can thus be described as natural inhibitors. The effect of varying inhibitor concentrations was tested electrochemically using electrochemical impedance spectroscopy and polarization curves. The analysis of electrochemical results obtained by direct immersion in the extracts and by self-assembled molecular layers (SAMs) for the Sargassum fluitans III, Sargassum natans I and VIII species was conclusive in both acid and alkaline environments. The results reveal an inhibitory efficacy of 88% at 50 mg/L for the crude extract of Sargassum fluitans III and efficacies greater than 97% for the chemical families of Sargassum fluitans III. Similarly, microbiological tests also suggest a bactericidal character. Results for Sargassum fluitans III crude extract show a minimum inhibitory concentration (MIC) of 0.005 mg/mL on Gram-negative bacteria and a MIC greater than 0.6 mg/mL on Gram-positive bacteria. These results make it possible to address local and international issues while valorizing a biomass rich in biodegradable molecules. The next step in this study will therefore be the evaluation of the toxicity of Sargassum spp.
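For context on how such efficacy percentages are commonly obtained from impedance measurements, a minimal sketch of the conventional relation between charge-transfer resistance and inhibition efficiency (the abstract does not state its exact formula, and the resistance values below are illustrative only):

```python
def inhibition_efficiency(r_ct_blank, r_ct_inhibited):
    """Inhibition efficiency (%) from charge-transfer resistances (ohm*cm^2)
    measured without and with the inhibitor present."""
    return (1.0 - r_ct_blank / r_ct_inhibited) * 100.0

# Illustrative: an inhibited electrode whose charge-transfer resistance is
# about 8x that of the blank corresponds to ~88% efficiency.
print(inhibition_efficiency(12.0, 100.0))  # 88.0
```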

Keywords: Sargassum, secondary metabolites, anticorrosive, antibacterial, natural inhibitors

Procedia PDF Downloads 70
33 Small Town, Big Urban Issues: The Case of Kiryat Ono, Israel

Authors: Ruth Shapira

Abstract:

Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unsolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. This is reflected in the quality of urban form and life, which has seen no significant progress in the last two to three decades despite the ever-growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 100,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some cures that we have developed to bridge the gap, and humbly suggest a practice that may be generic for similar cases. Basic Methodologies: The OBJECT, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: De-composition, Re-composition, the Centering process and, finally, Controlled Structural Disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the OBJECT, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for contemporary urban society to cluster by. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town remained deeply embedded, mostly in the quality of the public space and in the sense of clustered communities. In the past 20 years, the growing demand for housing has been addressed at the national level with recent master plans and urban regeneration policies, mostly encouraging individual economic initiatives.
Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono's future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of PLACE. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development, providing for the public good and preserving the characteristics of the PLACE, consisting of a toolbox of design guidelines able to reorganize space along the time axis in a coherent way. In Conclusion: The system of rules that we have developed can generate endless possible patterns, making sure that with each implemented fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy framework for the accelerated urbanization of our chaotic present.

Keywords: housing, architecture, urban qualities, urban regeneration, conservation, intensification

Procedia PDF Downloads 361
32 Investigating the Application of Composting for Phosphorous Recovery from Alum Precipitated and Ferric Precipitated Sludge

Authors: Saba Vahedi, Qiuyan Yuan

Abstract:

A vast majority of small municipalities and First Nations communities in Manitoba operate facultative or aerated lagoons for wastewater treatment, and most of them use ferric chloride (FeCl3) or alum (usually in the form of Al2(SO4)3·18H2O) as a coagulant for phosphorus removal. The insoluble particles that form during the coagulation process result in a massive volume of sludge, which is typically left in the lagoons. Therefore, phosphorus, which is a valuable nutrient, is lost in the process. In this project, the complete recovery of phosphorus from the sludge produced during phosphorus removal in wastewater lagoons, using a controlled composting process, is investigated. Objective: The main objective of this project is to compost alum-precipitated sludge produced in the process of phosphorus removal in wastewater treatment lagoons in Manitoba. The ultimate goal is to have a product that will meet the characteristics of Class A biosolids in Canada. A number of parameters will be evaluated, including the bioavailability of nutrients in the composted sludge, the toxicity of the sludge, and the bioavailability of phosphorus in the final compost product. The compost will be used as a source of P and compared to a commercial fertilizer (monoammonium phosphate, MAP). Experimental setup: Three different batches of compost piles have been run using the alum sludge and the ferric sludge. The alum phosphate sludge was collected from an innovative phosphorus removal system at the RM of Taché. The collected sludge was sent to the ALS laboratory to analyze the C/N ratio, TP, TN, TC, total Al, moisture content, pH, and metal concentrations. Wood chips, used as the bulking agent, were collected at the RM of Taché landfill. The sludge in the three piles was mixed with 3x dry woodchips. The mixture was turned manually every week, and the temperature, moisture content, and pH were monitored twice a week.
The temperature of the mixtures remained above 55 °C for two weeks. Each pile was kept for ten weeks to mature. The final products have been applied to two different plants to investigate the bioavailability of P in the compost product as well as the toxicity of the product. The two plant species, Canola and switchgrass, were selected based on their sensitivity, growth time, and compatibility with the Manitoba climate. The pots are weighed and watered every day to replenish moisture lost by evapotranspiration. A control experiment is also conducted using topsoil and chemical fertilizer (MAP). The experiment will be carried out in a growth room maintained at a day/night temperature regime of 25/15 °C, a relative humidity of 60%, and a photoperiod of 16 h. A total of three cropping (seeding to harvest) cycles need to be completed, each cycle 50 d in duration. Harvested biomass must be weighed and oven-dried for 72 h at 60 °C. The first cycle of Canola and switchgrass grown in the alum sludge compost was harvested at day 50, oven-dried, chopped into bits, finely ground in a mill grinder (< 0.2 mm), and digested using the wet oxidation method, in which plant tissue samples are digested with H2SO4 (99.7%) and H2O2 (30%) in an acid block digester. The digested plant samples need to be analyzed to measure the amount of total phosphorus.
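The 1:3 sludge-to-woodchip mixing step above is normally chosen to bring the blend into the C/N range favorable for composting (roughly 25-30:1). A minimal sketch of that mass balance; the composition percentages below are illustrative assumptions, not values from the abstract:

```python
def mix_cn_ratio(ingredients):
    """C/N ratio of a compost mix.
    ingredients: list of (dry_mass_kg, percent_C, percent_N) per component."""
    total_c = sum(m * c / 100.0 for m, c, _ in ingredients)
    total_n = sum(m * n / 100.0 for m, _, n in ingredients)
    return total_c / total_n

# Illustrative: 1 part alum sludge mixed with 3 parts dry woodchips
mix = [(1.0, 30.0, 4.0),   # sludge: hypothetical 30% C, 4% N
       (3.0, 50.0, 0.5)]   # woodchips: hypothetical 50% C, 0.5% N
print(round(mix_cn_ratio(mix), 1))  # 32.7
```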

Keywords: wastewater treatment, phosphorus removal, composting alum sludge, bioavailability of phosphorus

Procedia PDF Downloads 70
31 Strategies for Drought Adaptation and Mitigation via Wastewater Management

Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen

Abstract:

The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth's resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there is a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land's green cover. Deforestation fueled by the 'economic elites', or global players, has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had been performing CO2 sequestration for millions of years. The forest biomes have been turned into mono-cultivation farms to produce feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agriculture practices only provide feedstock for livestock and food processing industries with huge carbon and water footprints, two factors that have a 'cause and effect' relationship in the context of climate change. In contrast to organic and sustainable farming, mono-cultivation practices that produce food, fuel, and feedstock using chemicals deprive the soil of its fertility, abstract surface and ground waters beyond the limits of replenishment, emit greenhouse gases, and destroy biodiversity. There are numerous cases across the planet where, due to overuse, the levels of surface water reservoirs, such as Lake Mead in the southwestern USA, and of ground water, such as in Punjab, India, have deeply shrunk.
Unlike the rain-fed food production system on which the poor communities of the world rely, blue water (surface and ground water) dependent mono-cropping for industrial and processed food creates a water deficit, which puts the burden on domestic users. Excessive abstraction of both surface and ground waters for high-water-demand feedstock (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both of these factors have elevated global temperatures, which has caused cascading events such as soil water deficits, flash fires, and unprecedented burning of the woods, creating megafires on multiple continents, namely in the USA, South America, Europe, and Australia. Therefore, it is imperative to reduce the green and blue water footprints of the agriculture and industrial sectors through the recycling of black and gray waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.

Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae

Procedia PDF Downloads 100
30 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary alternative given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. A one-hidden-layer network was initialized, and the number of hidden layers was increased to five during training. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics derived from the lattice (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases. The best result achieved a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
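For reference, the WER metric used above is the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch of that computation:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance divided by the number of
    words in the reference transcription."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(r)][len(h)] / len(r)

print(wer("el gato negro", "el gato"))  # one deletion over 3 words: 0.333...
```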

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
29 Smart Mobility Planning Applications in Meeting the Needs of the Urbanization Growth

Authors: Caroline Atef Shoukry Tadros

Abstract:

Massive urbanization growth threatens the sustainability of cities and the quality of city life. This has raised the need for an alternate model of sustainability, so we need to plan future cities in a smarter way, with smarter mobility. Smart Mobility planning applications are solutions that use digital technologies and infrastructure advances to improve the efficiency, sustainability, and inclusiveness of urban transportation systems. They can contribute to meeting the needs of urbanization growth by addressing the challenges of traffic congestion, pollution, accessibility, and safety in cities. Some examples of Smart Mobility planning applications are: Mobility-as-a-Service (MaaS): a service that integrates different transport modes, such as public transport, shared mobility, and active mobility, into a single platform that allows users to plan, book, and pay for their trips. This can reduce reliance on private cars, optimize the use of existing infrastructure, and provide more choices and convenience for travelers; MaaS Global is a company that offers mobility-as-a-service solutions in several cities around the world. Traffic flow optimization: a solution that uses data analytics, artificial intelligence, and sensors to monitor and manage traffic conditions in real time. This can reduce congestion, emissions, and travel time, as well as improve road safety and user satisfaction; Waycare is a platform that leverages data from various sources, such as connected vehicles, mobile applications, and road cameras, to provide traffic management agencies with insights and recommendations to optimize traffic flow. Logistics optimization: a solution that uses smart algorithms, blockchain, and IoT to improve the efficiency and transparency of the delivery of goods and services in urban areas. This can reduce the costs, emissions, and delays associated with logistics, as well as enhance customer experience and trust.
ShipChain is a blockchain-based platform that connects shippers, carriers, and customers and provides end-to-end visibility and traceability of shipments. Autonomous vehicles: a solution that uses advanced sensors, software, and communication systems to enable vehicles to operate without human intervention. This can improve the safety, accessibility, and productivity of transportation, as well as reduce the need for parking space and infrastructure maintenance; Waymo is a company that develops and operates autonomous vehicles for various purposes, such as ride-hailing, delivery, and trucking. These are some of the ways that Smart Mobility planning applications can contribute to meeting the needs of urbanization growth. However, there are also various opportunities and challenges related to the implementation and adoption of these solutions, such as regulatory, ethical, social, and technical aspects. Therefore, it is important to consider the specific context and needs of each city and its stakeholders when designing and deploying Smart Mobility planning applications.

Keywords: smart mobility planning, smart mobility applications, smart mobility techniques, smart mobility tools, smart transportation, smart cities, urbanization growth, future smart cities, intelligent cities, ICT information and communications technologies, IoT internet of things, sensors, LiDAR, digital twin, AI artificial intelligence, AR augmented reality, VR virtual reality, robotics, CPS cyber-physical systems, citizens design science

Procedia PDF Downloads 73
28 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

Global information society requirements are transparency and reliability of data, as well as the ability to manage information resources independently; in particular, to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing contextual knowledge. Explication of the various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented query, frequency-ranked query), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is realized in an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations. Scholars and research groups may work in local network mode or in distributed IT environments, with the ability to appeal to the server resources of any participating organization. The paper discusses some specific cases of contextual knowledge explication using the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'.
An analysis of trends in the subject-themed texts allowed the authors to propose a content analysis methodology that combines full-text search with automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is laid out as a table that contains a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of each word. The analysis of 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media attributed the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (government, process organizer); interaction management (public officer, process performer) and technology (infrastructure). Isolation of these factors will lead to changes in the model of electronic interaction between government and society. In this study, the dominant social problems and the prevalence of different categories of subjects of computer gaming in science papers from 2005 to 2015 were identified. Therefore, several types of contextual knowledge are evidently identified: micro context; macro context; dynamic context; thematic collection of queries (interactive contextual knowledge expanding the composition of e-library information resources); multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both in terms of expanding the resource base on which they are held and in terms of the development of appropriate tools.
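The terminogramma described above amounts to a frequency-ranked word table with absolute and relative frequencies. A minimal sketch of that construction (a real system would first POS-tag the text and keep only nouns, which is omitted here):

```python
from collections import Counter

def terminogramma(tokens):
    """Frequency-ranked table of (word, absolute frequency, relative frequency %)."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return [(word, n, 100.0 * n / total) for word, n in counts.most_common()]

rows = terminogramma(["government", "portal", "government", "user"])
print(rows)  # [('government', 2, 50.0), ('portal', 1, 25.0), ('user', 1, 25.0)]
```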

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 357
27 Distribution System Modelling: A Holistic Approach for Harmonic Studies

Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet

Abstract:

The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy while meeting the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for establishing harmonic analysis guidelines. Among others, typical assumptions include balanced study conditions and a negligible impact of the frequency-dependent impedance characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further, simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a given device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems.
Given modern conditions, with the massive increase in the use of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to produce a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be accepted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script that supplies a varying voltage background distortion profile and the associated current harmonic response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of uncertainty in the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates significant differences from cases in which all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
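The total harmonic distortion levels discussed above follow the standard definition: the RMS of the harmonic content relative to the fundamental. A minimal sketch, with illustrative magnitudes rather than values from the study:

```python
import math

def thd_percent(rms_magnitudes):
    """THD (%) from a list of RMS magnitudes [V_fund, V_h2, V_h3, ...]:
    the fundamental first, followed by the higher-harmonic components."""
    fundamental = rms_magnitudes[0]
    harmonics = rms_magnitudes[1:]
    return 100.0 * math.sqrt(sum(v * v for v in harmonics)) / fundamental

# Illustrative: a 230 V feeder voltage with two small harmonic components
print(thd_percent([230.0, 11.5, 6.9]))  # ~5.83%
```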

Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling

Procedia PDF Downloads 157
26 Rabies Free Pakistan - Eliminating Rabies Through One Health Approach

Authors: Anzal Abbas Jaffari, Wajiha Javed, Naseem Salahuddin

Abstract:

Rationale: Rabies, a vaccine-preventable disease, continues to be a critical public health issue, killing around 2,000-5,000 people annually in Pakistan. Along with the spread of the disease among animals, the dog population remains a victim of brutal culling practices by the local authorities, which adversely affect the ecosystem (poison sinking into the soil, harming vegetation and contaminating water) and the spread of the disease. The dog population has been rising exponentially, primarily because of the lack of a consolidated nationwide Animal Birth Control (ABC) program and of awareness among local communities in general and children in particular. This is reflected in Pakistan's low SARE score of 1.5, which leaves the country trailing behind other developing countries such as Bangladesh (2.5) and the Philippines (3.5). According to an estimate, the province of Sindh alone is home to almost 2.5 million dogs. The clustering of dogs in peri-urban areas and inner-city localities leads to an increase in reported dog bite cases in these areas specifically. Objective: Rabies Free Pakistan (RFP), a joint venture of Getz Pharma Private Limited and Indus Hospital & Health Network (IHHN), was established in 2018 to eliminate rabies from Pakistan by 2030 using the One Health approach. Methodology: The RFP team is actively working on the advocacy and policy front with both the federal and provincial governments to ensure that all stakeholders currently involved in dog culling in Pakistan make a paradigm shift towards humane methods of vaccination and ABC. With the federal government, RFP aims to have rabies declared a notifiable disease.
RFP also works closely with the provincial government of Sindh to initiate a province-wide Rabies Control Program. RFP follows international standards and WHO-approved protocols for this program in Pakistan. The RFP team has achieved various milestones in the fight against rabies after successfully scaling up project operations, and has vaccinated more than 30,000 dogs and neutered around 7,000 dogs since 2018. Recommendations: Effective implementation of a rabies program (mass dog vaccination and ABC) requires a concentrated effort to address a variety of structural and policy challenges. This essentially demands a massive shift in individual attitudes towards rabies. The most significant challenges in implementing a standard policy at the structural level are the lack of institutional capacity, the shortage of vaccine, and the absence of inter-departmental coordination among major stakeholders: the federal government, the provincial ministries of health and livestock, and local bodies (including local councils). The lack of capacity among health care workers to treat dog bite cases emerges as a critical challenge at the clinical level. Conclusion: Pakistan can learn from the successful international models of Sri Lanka and Mexico, which, like RFP, adopted the One Health approach to eliminate rabies. The WHO-advised One Health approach provides policymakers with an interactive and cross-sectoral guide that involves all the essential elements of the ecosystem (including animals, humans, and other components).

Keywords: animal birth control, dog population, mass dog vaccination, one health, rabies elimination

Procedia PDF Downloads 179
25 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in the European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach was developed that integrates technical, legal and social aspects and provides business models for circular design and building with bio-based materials. In the scope of the project, the research outputs are displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL) located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimal combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, consisting of three basic construction methods, masonry, lightweight steel construction and wood framing construction, supplemented with bio-based construction methods such as cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted by utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators.
The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in a first iteration, and then the PDs, as assemblies of components, were further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and their environmental impact was also conducted. It was seen that masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays both low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
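As a rough illustration of how component-level results scale to the building level over the 60-year reference period, the sketch below multiplies a per-square-meter impact by component area and by the number of whole replacements within the period. The full-replacement assumption and all numeric values are ours, not the study's:

```python
import math

REFERENCE_PERIOD_Y = 60  # reference study period used in the assessment

def component_impact(area_m2, impact_per_m2, service_life_y):
    """Impact of one envelope component over the reference period, counting
    whole replacements when the service life is shorter than the period."""
    replacements = math.ceil(REFERENCE_PERIOD_Y / service_life_y)
    return area_m2 * impact_per_m2 * replacements

def building_impact(components):
    """Sum impacts over (area_m2, impact_per_m2, service_life_y) tuples."""
    return sum(component_impact(*c) for c in components)

# Illustrative: a roof lasting the full period plus a facade replaced once
print(building_impact([(80.0, 45.0, 60), (120.0, 30.0, 30)]))  # 10800.0
```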

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 182
24 Delicate Balance between Cardiac Stress and Protection: Role of Mitochondrial Proteins

Authors: Zuzana Tatarkova, Ivana Pilchova, Michal Cibulka, Martin Kolisek, Peter Racay, Peter Kaplan

Abstract:

Introduction: Normal functioning of mitochondria is crucial for cardiac performance. Mitochondria undergo mitophagy and biogenesis, and mitochondrial proteins are subject to extensive post-translational modifications. The state of mitochondrial homeostasis reflects overall cellular fitness and longevity. Perturbed mitochondria produce less ATP, release greater amounts of reactive molecules, and are more prone to apoptosis. Therefore, mitochondrial turnover is an integral aspect of quality control in which dysfunctional mitochondria are selectively eliminated through mitophagy. Currently, the progressive deterioration of physiological functions is seen as an accumulation of modified/damaged proteins with limited regenerative ability, and as a disturbance of the affected protein-protein communication during aging in myocardial cells. Methodologies: Our study used immunohistochemistry and biochemical methods (spectrophotometry, western blotting, immunodetection), as well as more sophisticated 2D electrophoresis and mass spectrometry, to evaluate protein-protein interactions and specific post-translational modifications. Results and Discussion: The mitochondrial stress response to reactive species was evaluated through electron transport chain (ETC) complexes, redox-active molecules, and their possible communication. Protein-protein interactions revealed a strong linkage between age and ETC protein subunits. The redox state was strongly affected in senescent mitochondria, with a shift in favor of more pro-oxidizing conditions within cardiomyocytes. Acute myocardial ischemia and ischemia-reperfusion (IR) injury affected ETC complexes I, II and IV, with no change in complex III. Ischemia induced a decrease in total antioxidant capacity, MnSOD, GSH and catalase activity, with recovery to some extent during reperfusion. While MnSOD protein content was higher in the IR group, activity returned to 95% of control.
Nitric oxide is one of the biological molecules that can outcompete MnSOD for superoxide and produce peroxynitrite. This process is faster than dismutation and led to 10-fold higher production of nitrotyrosine after IR injury in adults, with higher protection in senescent ones. 2D protein profiling revealed 140 mitochondrial proteins, 12 of them with significant changes after IR injury, and 36 individual nitrotyrosine-modified proteins were further identified by mass spectrometry. Linking these two groups, 5 proteins were both altered after IR and nitrated, but only one showed massive nitration relative to its lowered protein content after IR injury in adults. Conclusions: Senescent cells have a greater proportion of protein content, which might be modulated by several post-translational modifications. If these protein modifications are connected to functional consequences and the protein-protein interactions are revealed, this link may lead to a solution. Taken all together, dysfunctional proteostasis can play a causative role, and restoration of the protein homeostasis machinery is protective against aging and possibly age-related disorders. This work was supported by the project VEGA 1/0018/18 and by the project 'Competence Center for Research and Development in the field of Diagnostics and Therapy of Oncological diseases', ITMS: 26220220153, co-financed from EU sources.

Keywords: aging heart, mitochondria, proteomics, redox state

Procedia PDF Downloads 166
23 Mean Nutrient Intake and Nutrient Adequacy Ratio in India: Occurrence of Hidden Hunger in Indians

Authors: Abha Gupta, Deepak K. Mishra

Abstract:

The focus of food security studies in India has been on the adequacy of calories and its linkage with poverty level. India, currently undergoing a massive demographic and epidemiological transition, has demonstrated a decline in average physical activity with improved mechanization and urbanization. The food consumption pattern is also changing, with decreasing intake of coarse cereals and a marginal increase in the consumption of fruits, vegetables and meat products, resulting in a nutrition transition in the country. However, deficiency of essential micronutrients such as vitamins and minerals is rampant, despite their growing importance in combating lifestyle and other modern diseases. Calorie-driven studies can hardly tackle the complex problem of malnutrition. This paper fills these research lacunae and analyses the mean intake of different macro- and micronutrients among different socio-economic groups and the adequacy of these nutrients against the recommended dietary allowance. For this purpose, a cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district of the state of Uttar Pradesh, India. Data on the quantity consumed of 74 food items, grouped into 10 food categories, with a recall period of seven days, were collected from the households and converted into energy, protein, fat, carbohydrate, calcium, iron, thiamine, riboflavin, niacin and vitamin C using standard guidelines of the National Institute of Nutrition. These converted nutrients were compared with the recommended norms given by the National Nutrition Monitoring Bureau. Per capita nutrient adequacy was calculated by dividing the mean nutrient intake by the household size and then comparing it with the recommended norm. Findings demonstrate that the sources of both macro- and micronutrients are mainly cereals, followed by milk, edible oil and sugar items. The share of meat in providing essential nutrients is very low due to the vegetarian diet.
Vegetables, pulses, nuts, fruits and dry fruits are a poor source of most of the nutrients. Further analysis shows that intake of most of the nutrients is higher than the recommended norm. Riboflavin is the only vitamin whose intake is less than the standard norm. Poor groups, labourers, small farmers, Muslims and scheduled castes demonstrate comparatively lower intake of all nutrients than their counterpart groups, though their intake of most macro- and micronutrients is still significantly higher than the norm. One of the major reasons for the higher intake of most nutrients across all socio-economic groups is the higher consumption of a monotonous diet based on cereals and milk. Most of the nutrients get their major share from cereals, particularly wheat, and from milk intake. It can be concluded from the analysis that although there is adequate intake of most nutrients in the diet of the rural population, their source is mainly cereals and milk products, depicting a monotonous diet. Hence, more efforts are needed to diversify the diet by giving more focus to the production of other food items, particularly fruits, vegetables and pulse products. Awareness among the population, more accessibility, and incorporating food items other than cereals in government social safety programmes are other measures to improve food security in India.
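The per-capita adequacy calculation described above (household intake divided by household size, compared with the recommended norm) can be sketched as below. The function name and all figures are illustrative assumptions for exposition, not the survey's actual NIN/NNMB values.

```python
# Minimal sketch of the per-capita nutrient adequacy computation: household
# intake is divided by household size and compared with the recommended
# norm (a ratio of 1.0 means exactly adequate). All numbers are invented
# examples, not the study's data.

def nutrient_adequacy_ratio(household_intake, household_size, recommended_norm):
    """Per-capita intake relative to the recommended dietary norm."""
    per_capita = household_intake / household_size
    return per_capita / recommended_norm

# Example: a 5-member household consuming 84,000 kcal over a 7-day recall
# period, against an assumed norm of 2,400 kcal per person per day.
daily_energy = 84000 / 7                                      # kcal/day
nar_energy = nutrient_adequacy_ratio(daily_energy, 5, 2400)   # ratio of 1.0

# Riboflavin, the one nutrient found below the norm: e.g. 5.6 mg/day for
# the household against an assumed 1.4 mg/day norm gives a ratio of 0.8.
nar_riboflavin = nutrient_adequacy_ratio(5.6, 5, 1.4)
```

A ratio below 1.0, as in the riboflavin example, flags the kind of hidden hunger the paper describes even when calorie adequacy is met.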

Keywords: hidden hunger, India, nutrients, recommended norm

Procedia PDF Downloads 314
22 Challenges for Reconstruction: A Case Study from 2015 Gorkha, Nepal Earthquake

Authors: Hari K. Adhikari, Keshab Sharma, K. C. Apil

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 hit the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of Kathmandu Valley. This paper aims to explore the challenges of reconstruction in the rural earthquake-stricken areas of Nepal. The Gorkha earthquake significantly affected the livelihood of people and the overall economy in Nepal, causing severe damage and destruction in central Nepal, including the nation's capital. A large part of the earthquake-affected area is difficult to access, with rugged terrain and scattered settlements, which posed unique challenges to reconstruction and rehabilitation efforts on a massive scale. About 800 thousand buildings were affected, leaving 8 million people homeless. The challenge of reconstructing roughly 800 thousand houses is arduous for Nepal against the background of its turbulent political scenario and weak governance. Despite the significant actors involved in the reconstruction process, no appreciable relief has reached the ground, which is reflected in the frustration of affected people. The 2015 Gorkha earthquake is one of the most devastating disasters in the modern history of Nepal. To the best of our knowledge, there is no comprehensive study on reconstruction after disasters in modern Nepal that integrates the necessary information to deal with the challenges and opportunities of reconstruction. The study was conducted using a qualitative content analysis method. Thirty engineers and ten social mobilizers working on reconstruction, along with more than a hundred local social workers, local party leaders, and earthquake victims, were selected arbitrarily. Information was collected through semi-structured interviews with open-ended questions, focus group discussions, and field notes, with no previous assumptions.
The authors also reviewed literature and documents covering academic and practitioner studies on the challenges of reconstruction after earthquakes in developing countries, such as the 2001 Gujarat earthquake, the 2005 Kashmir earthquake, the 2003 Bam earthquake and the 2010 Haiti earthquake, which have very similar building typologies and economic, political, geographical, and geological conditions to Nepal. Secondary data were collected from reports, action plans, and reflection papers of governmental entities, non-governmental organizations, private sector businesses, and online news. This study concludes that inaccessibility, absence of local government, weak governance, weak infrastructure, lack of preparedness, knowledge gaps, manpower shortages, etc. are the key challenges of reconstruction after the 2015 earthquake in Nepal. After scrutinizing different challenges and issues, the study suggests that good governance, integrated information, public participation, and short-term and long-term strategies to address technical issues are crucial factors for timely and quality reconstruction in the context of Nepal. The sample collected for this study is relatively small and may not be fully representative of the stakeholders involved in reconstruction. However, the key findings of this study are ones that need to be recognized by academics, governments, and implementation agencies, and considered in the implementation of post-disaster reconstruction programs in developing countries.

Keywords: Gorkha earthquake, reconstruction, challenges, policy

Procedia PDF Downloads 407
21 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan's syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, which was confirmed by autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit.
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. As we hypothesize, early and aggressive management of vertebral artery dissection may potentially prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection

Procedia PDF Downloads 275
20 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment

Authors: F. Uriel, M. M. Fernandez Liporace

Abstract:

In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, regarding the high figures of adjustment problems, academic failure and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed in a major research program, several studies analysed multiple samples, totaling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables (perceived social support, learning styles, learning strategies, and academic motivation) by age, gender, and degree of academic advance (freshmen versus sophomores). Thus, an inferential group-differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, aimed at examining associations between the four psychological variables on the one hand and academic achievement on the other, was addressed by correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results that were obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance in the first goal's findings.
The most relevant results were: First, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes. Fourth, despite these features, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fifth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a higher number of more efficient learning strategies. Sixth, freshman low achievers lack intrinsic motivation. Seventh, model testing showed that social support, learning styles and academic motivation influence learning strategies, which affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings led to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and better study programs, syllabi and classes, as well as tutoring and training systems. Such developments should be targeted at the support and empowerment of students in their academic pathways, and therefore at the upgrade of learning quality, especially in the case of freshmen, male freshmen, and low achievers.
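The two inferential steps the study describes, Student's t-tests for group differences and Pearson correlations between psychological predictors and grades, can be sketched as below. The score vectors are invented illustrations, not the study's data, and in practice a statistics package (e.g. scipy) would be used rather than hand-rolled formulas.

```python
# Hedged sketch of the study's two analysis steps: a pooled-variance
# two-sample t statistic (e.g. freshmen vs sophomores on a learning-strategy
# scale) and a Pearson correlation between a predictor and grades.
# All scores below are made-up examples.
import math

def t_statistic(a, b):
    """Two-sample Student's t statistic, pooled variance (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

freshmen   = [3.1, 2.8, 3.5, 2.9, 3.0]  # illustrative strategy scores
sophomores = [3.6, 3.9, 3.4, 3.8, 3.7]
t = t_statistic(freshmen, sophomores)   # negative: freshmen score lower

motivation = [2.0, 3.0, 3.5, 4.0, 4.5]  # illustrative predictor
grades     = [5.0, 6.0, 6.5, 7.5, 8.0]
r = pearson_r(motivation, grades)       # strongly positive
```

A large-magnitude t flags a group difference worth testing against a critical value, and a large positive r is what justified entering the predictor into the regression models described above.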

Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support

Procedia PDF Downloads 122
19 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce

Authors: Stephan Boehm, Julia Engel, Judith Eisser

Abstract:

During the last decade, communication on the Internet transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, a presentation of product- or service-related information accumulated on websites, micro pages or portals often remains the pivot and focal point of a customer journey. A more recent change of user behavior, especially in younger user groups and in Asia, is going along with the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices like voice calls or the short messaging service. Moreover, they can be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided on mobile websites. One option better aligned to the customer interaction in messaging apps are so-called chatbots. Chatbots are conversational programs or dialog systems simulating a text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps by using rule- or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers' user experience and emotions. The study is based on a random sample of about 60 smartphone users in the group of 20- to 30-year-olds.
Participants are randomly assigned into two groups and participate in a traditional website or an innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented using a Wizard-of-Oz experimental approach, for reasons of simplicity and to allow for more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while interacting with the website or the chatbot. This study is a work in progress. The results will provide first insights into the effects of chatbot usage on user experience and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications of situations where chatbots might be favorable in comparison to the usage of traditional website-based mobile commerce can be identified.

Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping

Procedia PDF Downloads 458
18 Challenges, Practices, and Opportunities of Knowledge Management in Industrial Research Institutes: Lessons Learned from Flanders Make

Authors: Zhenmin Tao, Jasper De Smet, Koen Laurijssen, Jeroen Stuyts, Sonja Sioncke

Abstract:

Today, the quality of knowledge management (KM) has become one of the underpinning factors in the success of an organization, as it determines the effectiveness of capitalizing on the organization's knowledge. Overall, KM in an organization consists of five aspects: (knowledge) creation, validation, presentation, distribution, and application. Among others, KM in research institutes is considered the cornerstone, as their activities cover all five aspects. Furthermore, KM in a research institute facilitates the steering committee in envisioning the future roadmap, identifying knowledge gaps, and making decisions on future research directions. Likewise, KM is even more challenging in industrial research institutes. From a technical perspective, technology advancement in the past decades calls for combinations of breadth and depth in expertise, which poses challenges in talent acquisition and, therefore, knowledge creation. From a regulatory perspective, the strict intellectual property protection from industry collaborators and/or the contractual agreements made by possible funding authorities form extra barriers to knowledge validation, presentation, and distribution. From a management perspective, seamless KM activities are only guaranteed by inter-disciplinary talents that combine technical background knowledge, management skills, and leadership, let alone international vision. From a financial perspective, the long feedback period of new knowledge, together with the massive upfront investment costs and low reusability of the fixed assets, leads to a low return on research capital (RORC) that jeopardizes KM practice. In this study, we aim to address the challenges, practices, and opportunities of KM in Flanders Make, a leading European research institute specialized in the manufacturing industry. In particular, the analyses encompass an internal KM project which involves functionalities ranging from management to technical domain experts.
This wide range of functionalities provides comprehensive empirical evidence on the challenges and practices with respect to the abovementioned KM aspects. Then, we ground our analysis in the critical dimensions of KM: individuals, socio-organizational processes, and technology. The analyses have three steps: First, we lay the foundation and define the environment of this study by briefing the KM roles played by different functionalities in Flanders Make. Second, we zoom in on the CoreLab MotionS, where the KM project is located. In this step, given the technical domains covered by MotionS products, the challenges in KM are addressed with respect to the five KM aspects and three critical dimensions. Third, by detailing the objectives, practices, results, and limitations of the MotionS KM project, we justify the practices and opportunities derived in the execution of KM with respect to the challenges addressed in the second step. The results of this study are twofold: First, a KM framework that consolidates past knowledge is developed. A library based on this framework can, therefore, 1) overlook past research output, 2) accelerate ongoing research activities, and 3) envision future research projects. Second, the challenges in KM on both the individual level (actions) and the socio-organizational level (e.g., interactions between individuals) are identified. By doing so, suggestions and guidelines are provided for KM in the context of industrial research institutes. To this end, the results of this study are reflected against the findings in the existing literature.

Keywords: technical knowledge management framework, industrial research institutes, individual knowledge management, socio-organizational knowledge management

Procedia PDF Downloads 114
17 Facies, Diagenetic Analysis and Sequence Stratigraphy of Habib Rahi Formation Dwelling in the Vicinity of Jacobabad Khairpur High, Southern Indus Basin, Pakistan

Authors: Muhammad Haris, Syed Kamran Ali, Mubeen Islam, Tariq Mehmood, Faisal Shah

Abstract:

Jacobabad Khairpur High, part of the Sukkur rift zone, is the boundary separating the Central and Southern Indus Basins, formed as a result of Post-Jurassic uplift after the deposition of the Middle Jurassic Chiltan Formation. The Habib Rahi Formation of Middle to Late Eocene age outcrops in the vicinity of Jacobabad Khairpur High; a section at Rohri near Sukkur was measured in detail for lithofacies, microfacies, diagenetic analysis and sequence stratigraphy. The Habib Rahi Formation is richly fossiliferous and consists mostly of limestone with subordinate clays and marl. The total thickness of the formation in this section is 28.8 m. The bottom of the formation is not exposed, while the upper contact with the Sirki Shale of Middle Eocene age is unconformable in some places. The section was measured using the Jacob's staff method, and traverses were made perpendicular to the strike. Four different lithofacies were identified based on outcrop geology, which include coarse-grained limestone facies (HR-1 to HR-5), massive bedded limestone facies (HR-6 to HR-7), micritic limestone facies (HR-8 to HR-13) and algal dolomitic limestone facies (HR-14). A total of 14 rock samples were collected from the outcrop for detailed petrographic studies, and thin sections of the respective samples were prepared and analyzed under the microscope. On the basis of Dunham's (1962) classification system, after studying textures, grain size and fossil content, and using Folk's (1959) classification system, after reviewing allochem types, four microfacies were identified. These microfacies include HR-MF 1: Benthonic Foraminiferal Wackestone/Biomicrite Microfacies, HR-MF 2: Foraminiferal Nummulites Wackestone-Packstone/Biomicrite Microfacies, HR-MF 3: Benthonic Foraminiferal Packstone/Biomicrite Microfacies, and HR-MF 4: Bioclast Carbonate Mudstone/Micrite Microfacies. The abundance of larger benthic Foraminifera (LBF), including Assilina sp., A. spira abrade, A. granulosa, A. dandotica, A. laminosa, Nummulites sp., N. fabiani, N. striatus, N. globulus, Textularia, bioclasts, and red algae indicates a shallow marine (tidal flat) environment of deposition. Based on variations in rock types, grain size, and marine fauna, the Habib Rahi Formation shows progradational stacking patterns, which indicate coarsening-upward cycles. A second-order sea-level rise is identified (spanning from the Ypresian to the Bartonian age) that represents the Transgressive System Tract (TST), along with a third-order Regressive System Tract (RST) (spanning from the Bartonian to the Priabonian age). Diagenetic processes include fossil replacement by mud, dolomitization, pressure-dissolution-associated stylolite features, and filling with dark organic matter. The presence of the microfossils, including Nummulites striatus, N. fabiani, and Assilina dandotica, signifies a Bartonian to Priabonian age for the Habib Rahi Formation.

Keywords: Jacobabad Khairpur High, Habib Rahi Formation, lithofacies, microfacies, sequence stratigraphy, diagenetic history

Procedia PDF Downloads 466
16 Surface Plasmon Resonance Imaging-Based Epigenetic Assay for Blood DNA Post-Traumatic Stress Disorder Biomarkers

Authors: Judy M. Obliosca, Olivia Vest, Sandra Poulos, Kelsi Smith, Tammy Ferguson, Abigail Powers Lott, Alicia K. Smith, Yang Xu, Christopher K. Tison

Abstract:

Post-Traumatic Stress Disorder (PTSD) is a mental health problem that people may develop after experiencing traumatic events such as combat, natural disasters, and major emotional challenges. Tragically, the number of military personnel with PTSD correlates directly with the number of veterans who attempt suicide, with the highest rate in the Army. Research has shown epigenetic risks in those who are prone to several psychiatric dysfunctions, particularly PTSD. Once initiated in response to trauma, epigenetic alterations, in particular DNA methylation in the form of 5-methylcytosine (5mC), alter chromatin structure and repress gene expression. Current methods to detect DNA methylation, such as bisulfite-based genomic sequencing techniques, are laborious and involve massive analysis workflows while still having high error rates. A faster and simpler detection method of high sensitivity and precision would be useful in a clinical setting to confirm potential PTSD etiologies, prevent other psychiatric disorders, and improve military health. A nano-enhanced surface plasmon resonance imaging (SPRi)-based assay that simultaneously detects site-specific 5mC bases (termed PTSD bases) in methylated genes related to PTSD is being developed. Arrays on a sensing chip were first constructed for parallel detection of PTSD bases using synthetic and genomic DNA (gDNA) samples. For the gDNA sample extracted from the whole blood of a PTSD patient, the sample was first digested using specific restriction enzymes, and the fragments were denatured to obtain single-stranded methylated target genes (ssDNA). The resulting mixture of ssDNA was then injected into the assay platform, where targets were captured by specific DNA aptamer probes previously immobilized on the surface of the sensing chip. The PTSD bases in the targets were detected by an anti-5-methylcytosine antibody (anti-5mC), and the resulting signals were then enhanced by a universal nanoenhancer.
Preliminary results showed successful detection of a PTSD base in a gDNA sample. Brighter spot images and higher delta values (control-subtracted reflectivity signals) relative to those of the control were observed. We also implemented an in-house surface activation system for detection and developed disposable SPRi chips. Multiplexed PTSD base detection of target methylated genes in blood DNA from PTSD patients of varying severity (asymptomatic and severe) was conducted. The diagnostic capability being developed is a platform technology; upon successful implementation for PTSD, it could be reconfigured for the study of a wide variety of neurological disorders, such as traumatic brain injury, Alzheimer's disease, schizophrenia, and Huntington's disease, and can be extended to the analysis of other sample matrices such as urine and saliva.

Keywords: epigenetic assay, DNA methylation, PTSD, whole blood, multiplexing

15 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin

Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi

Abstract:

During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic Rift). During this rifting phase, the Moroccan Meseta experienced an extensional tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-El Gara-Berrechid basin. It is thus essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the lower clay-salt series, attributed to the Triassic, and the upper clay-salt series, attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. Detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen facies types, and eight architectural elements and facies associations in the Triassic series. A progressive decrease in paleoslope over time led the paleoenvironment to evolve from a proximal alluvial-fan system to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats, and lagoons developed. The pure and massive halitic facies at the top of the series probably indicate an evolution of the depositional environment toward a shallow subtidal setting. The presence of these evaporites indicates a climate that favored their precipitation, in this case a fairly hot and arid climate. Sedimentological analysis of the supra-basaltic part shows that during the Lower Liassic, the paleoslope after the basaltic effusion remained weak, with distal environments.
Faciological analysis revealed the presence of four major lithofacies (sandstone, silty, clayey, and evaporitic) organized in two mega-sequences: sedimentation of the first, rock-salt mega-sequence took place in an open brine-depression system, followed by saline mudflats under continental influence. The upper clay mega-sequence displays facies documenting sea-level fluctuations from the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic onward but experienced a sudden rupture with the emission of basaltic flows, which are interstratified in the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, would derive from fissural volcanism, probably occurring through transfer faults located to the NW and SE of the basin; its emplacement was probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea, controlled by a progressive expansion initiated in the west in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.

Keywords: basalt, CAMP, Liassic, Morocco, sedimentology, Triassic

14 Residential Building Facade Retrofit

Authors: Galit Shiff, Yael Gilad

Abstract:

The need to retrofit old buildings lies in the fact that buildings are responsible for a major share of energy use and CO₂ emissions, and existing old structures are more dominant in their effect than new energy-efficient buildings. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Facade design plays a vital role in a building's energy performance and in the comfort conditions of its units. A residential facade-retrofit methodology and an applicative feasibility study have been carried out for the past four years, with two projects already fully renovated. The intention of this study is to serve as a case study for limited-budget facade retrofit in Mediterranean-climate urban areas. The two case-study buildings are in Israel but are set in different local climatic conditions: one in Sderot, in the south of the country, and one in Migdal HaEmek, in the north. The building typology is similar. The budget of the projects is around $14,000 per unit and covers interventions to the buildings' envelopes while tenants remain in residence. Extensive research and analysis of the existing conditions were carried out: the buildings' components, materials, and envelope sections were mapped, examined, and compared to relevant updated standards; solar-radiation simulations of the buildings in their surroundings were run for winter and summer days; and the energy rating of each unit, as well as of the building as a whole, was calculated according to the Israeli Energy Code. The buildings' facades were documented with a thermal camera at different hours of the day, and this information was superimposed with data on electricity use and thermal comfort collected from the residential units.
Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were the ones below the roof and the ones above the elevated entrance floor (pilotis); old buildings tend to have poor insulation on those two horizontal surfaces, which therefore require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies. In the southern project, there was an extreme difference in solar radiation levels between the main facade and the back elevation; eventually, it was decided to invest in insulating the main south-west facade and the side facades, leaving the back north-east facade almost untouched. Lower radiation levels in the northern project led to a different tactic: a combination of basic insulation on all facades, together with intensive treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of windows in the northern project required replacing them all, in the southern project it was found more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows that there are large differences due to location in different climatic zones and variation in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed. Such a catalogue would enable public housing companies in the Mediterranean climate to promote massive projects of renovating existing old buildings with minimal analysis and planning processes.
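The trade-off between insulating all facades lightly and treating selected facades intensively can be sketched with a steady-state conductive heat-loss comparison (Q = U·A·ΔT). The U-values, areas, and temperature difference below are illustrative placeholders, not measurements from the two case-study buildings:

```python
def facade_heat_loss(u_value, area_m2, delta_t):
    """Steady-state conductive loss through a facade element, in watts.

    Q = U * A * dT, where U is thermal transmittance (W/m2K),
    A is facade area (m2), and dT the indoor-outdoor temperature
    difference (K). All inputs here are illustrative.
    """
    return u_value * area_m2 * delta_t

# Hypothetical uninsulated wall vs. the same wall with external insulation.
existing = facade_heat_loss(u_value=2.0, area_m2=120.0, delta_t=15.0)
retrofit = facade_heat_loss(u_value=0.5, area_m2=120.0, delta_t=15.0)
savings = 1 - retrofit / existing  # fraction of conductive loss avoided
```

Running the same comparison per facade orientation, weighted by the simulated solar radiation on each elevation, is one way to reproduce the kind of prioritization described above, where the high-gain south-west facade justified investment while the back facade was left almost untouched.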

Keywords: facade, low budget, residential, retrofit
