Search results for: engineering applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8836

136 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite its frequently criticized disadvantages, the traditional paper-and-pencil assessment remains the most frequently used method in our schools. Although such assessments provide acceptable measurement, they are not capable of capturing all the aspects and richness of learning and knowledge. Also, many assessments used in schools decontextualize assessment from learning; they focus on learners’ standing on a particular topic but not on how student learning changes over time. For these reasons, many scholars advocate that using simulations and games (S&G) as assessment tools has significant potential to overcome the problems of traditional methods. S&G can benefit from changes in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students’ learning at a particular point in time. To investigate the potential of using educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate-level engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed AEA method is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, this study examines three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment can measure the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test measuring student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA = 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly on the latent factors in the model. In the second analysis, multiple regression was used to test whether the external measure significantly predicts students’ performance in the game. The results indicated that the two predictors explained 36.3% of the variance (R² = .36, F(2, 96) = 27.42, p < .001). Students’ posttest scores significantly predicted game performance (β = .60, p < .001). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study helps researchers understand how to design an AEA and showcases an implementation by providing an example methodology for validating this type of assessment.
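As an illustration of the convergent-validity step, the regression of game performance on the external measures can be sketched as follows. The data, variable names, and coefficients below are synthetic placeholders (the study's real dataset is not shown); only the sample size of 99 is taken from the abstract.

```python
import numpy as np

# Synthetic stand-ins for the study's variables (hypothetical values).
rng = np.random.default_rng(42)
n = 99                                    # sample size reported in the study
posttest = rng.normal(70, 10, size=n)     # external validated test score
pretest = rng.normal(60, 10, size=n)
game_score = 0.6 * posttest + 0.1 * pretest + rng.normal(0, 8, size=n)

# Ordinary least squares with an intercept, via the normal equations.
X = np.column_stack([np.ones(n), posttest, pretest])
beta, *_ = np.linalg.lstsq(X, game_score, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((game_score - pred) ** 2) / np.sum((game_score - game_score.mean()) ** 2)
```

With the synthetic coefficients above, `r2` lands near the 0.36 reported in the abstract, and `beta[1]` recovers the posttest effect.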

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 421
135 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec, and low parasitic resistance. In addition, this work compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue. Therefore, the self-aligned silicide (salicide) technique is used to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has several advantages, such as a lower thermal budget, which allows easier integration with high-k/metal-gate processes than conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors), a longer effective channel length than conventional MOSFETs, and the avoidance of complicated source/drain engineering. To solve the turn-off problem of JL-TFTs, an ultra-thin body (UTB) structure is needed so that the channel region can be fully depleted in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts.
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer; the unreacted metal was then lifted off. The second annealing step lowered the sheet resistance and stabilized the silicide phase. The ultra-thin poly-Si nanowire junctionless thin-film transistor (NW JL-TFT) with nickel silicide contacts is demonstrated, revealing a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec, and low parasitic resistance. In the silicide film analysis, the second annealing step was shown to produce a lower sheet resistance and a firmly merged silicide phase. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.
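The sheet resistance figure of merit used throughout the abstract follows directly from the film resistivity and thickness, Rs = ρ/t. A minimal sketch, with a representative (not measured) NiSi resistivity and an assumed film thickness:

```python
def sheet_resistance(resistivity_ohm_m, thickness_m):
    """Sheet resistance Rs = rho / t, in ohms per square."""
    return resistivity_ohm_m / thickness_m

# Hypothetical values: NiSi resistivity ~14 uOhm*cm (= 14e-8 Ohm*m),
# 20 nm silicide film thickness. Neither value is from the paper.
rs = sheet_resistance(14e-8, 20e-9)  # -> 7.0 Ohm/sq
```

This shows why thinning the silicide raises Rs and why the second anneal, which lowers the effective resistivity, compensates for it.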

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 237
134 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry

Authors: Paulomi Polly Burey, Mark Lynch

Abstract:

It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such activities to an online cohort, and issues such as occupational health and safety in the students’ learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment; the experiments (1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; (2) were safe for use at home by university students; and (3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment, coupled with video laboratory demonstrations, were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and have a full laboratory experience from their own homes.
For the virtual laboratory component, students viewed a laboratory safety induction and were advised on the characteristics of a good home laboratory space prior to carrying out their experiments. Following this activity, students observed laboratory demonstrations of the experimental series they would carry out in their own learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped or hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be used for longer than a typical physical laboratory class, thus allowing more time for understanding techniques.

Keywords: chemistry, food science, future pedagogy, STEM education

Procedia PDF Downloads 168
133 Research on Reminiscence Therapy Game Design

Authors: Web Huei Chou, Li Yi Chun, Wenwe Yu, Han Teng Weng, H. Yuan, T. Yang

Abstract:

The prevalence of dementia is estimated to rise to 78 million by 2030 and 139 million by 2050. Alzheimer's disease is the most common form of dementia, contributing to 60–70% of cases. Addressing this growing challenge is crucial, especially considering the impact on older individuals and their caregivers. To reduce the behavioral and psychological symptoms of dementia, people with dementia use a variety of pharmacological and non-pharmacological treatments, and some studies have found that non-pharmacological interventions have potential benefits for depression, cognitive function, and social activity. Butler developed reminiscence therapy as a method of treating dementia. Through ‘life review,’ individuals can recall past events, activities, and experiences, which can reduce depression in the elderly and improve their quality of life, helping give meaning to their lives and supporting independent living. The life review process uses a variety of memory triggers, such as household items, past objects, photos, and music, and can be conducted in group or individual settings and in structured or unstructured formats. However, despite the advantages of reminiscence therapy, past work has repeatedly pointed out that current research lacks rigorous experimental evaluation and cannot offer clear, generalizable results. Therefore, this study uses physiological sensing experiments to establish a feasible experimental and verification method, providing clearer design specifications for reminiscence therapy and supporting its wider application in healthy aging. This study is an ongoing research project, a collaboration between the School of Design at Yunlin University of Science and Technology in Taiwan and the Department of Medical Engineering at Chiba University in Japan.
We use traditional rice dishes from Taiwan and Japan as reminiscence content to construct a narrative structure for the elderly in each country for life review activities, providing an easy-to-carry reminiscence therapy game with an intuitive interaction design. The project is expected to be completed in 36 months. The design team constructed and designed the game after conducting literary and historical surveys and interviews with elders to confirm the reminiscence material in Taiwan and Japan. The Japanese team planned the electrodermal activity (EDA) and blood volume pulse (BVP) experimental environments and the data-processing model; after experiments with elderly participants at both sites, the results were analyzed and discussed jointly. The research has completed the first 24 months of pre-study and design work and has entered the project acceptance stage.
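One basic computation a BVP data-processing model performs is estimating heart rate from the pulse waveform. The sketch below counts waveform peaks in a synthetic signal; the clean 1.2 Hz sine stands in for a real sensor trace, and the sampling rate and threshold are assumptions, not values from the study:

```python
import numpy as np

# Synthetic BVP trace: an idealized pulse wave at 1.2 Hz (72 beats/min),
# sampled at an assumed 64 Hz for 10 seconds.
fs = 64.0
t = np.arange(0.0, 10.0, 1.0 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t)

# Count local maxima above a threshold as heartbeats.
peaks = [i for i in range(1, len(bvp) - 1)
         if bvp[i] > bvp[i - 1] and bvp[i] >= bvp[i + 1] and bvp[i] > 0.5]
heart_rate = len(peaks) / 10.0 * 60.0  # beats per minute
```

Real BVP signals would first need band-pass filtering and artifact rejection; this sketch only illustrates the peak-counting step.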

Keywords: reminiscence therapy, aging health, design research, life review

Procedia PDF Downloads 32
132 Urban Seismic Risk Reduction in Algeria: Adaptation and Application of the RADIUS Methodology

Authors: Mehdi Boukri, Mohammed Naboussi Farsi, Mounir Naili, Omar Amellal, Mohamed Belazougui, Ahmed Mebarki, Nabila Guessoum, Brahim Mezazigh, Mounir Ait-Belkacem, Nacim Yousfi, Mohamed Bouaoud, Ikram Boukal, Aboubakr Fettar, Asma Souki

Abstract:

The seismic risk to which urban centres are increasingly exposed has become a worldwide concern. International cooperation is necessary to exchange information and experience for prevention and for establishing action plans in the countries prone to this phenomenon. To that end, the 1990s were designated as the 'International Decade for Natural Disaster Reduction (IDNDR)' by the United Nations, whose interest was to promote the capacity to resist various natural, industrial and environmental disasters. Within this framework, the RADIUS project (Risk Assessment Tools for Diagnosis of Urban Areas Against Seismic Disaster) was launched in 1996; its main objective is to mitigate seismic risk in developing countries through the development of a simple and fast methodological and operational approach, making it possible to evaluate the vulnerability as well as the socio-economic losses under probable earthquake scenarios in exposed urban areas. In this paper, we present the adaptation and application of this methodology to the Algerian context for seismic risk evaluation in urban areas potentially exposed to earthquakes. This application consists of performing an earthquake scenario in the urban centre of Constantine city, located in the north-east of Algeria, allowing estimation of the seismic damage to the city's buildings. For that purpose, an inventory of 30,706 building units was carried out by the National Earthquake Engineering Research Centre (CGS). These buildings were digitized in a database comprising their technical information using a Geographical Information System (GIS), and were then classified according to the RADIUS methodology. The study area was subdivided into 228 meshes of 500 m per side, grouped into ten sectors. The results of this earthquake scenario highlight that the ratio of likely damage is about 23%.
This severe damage results from the high concentration of old buildings and unfavourable soil conditions. This simulation of probable seismic damage to buildings, together with the generated GIS damage maps, provides a predictive evaluation of the damage that a potential earthquake near Constantine city could cause. These theoretical forecasts are important for decision makers, helping them take adequate preventive measures and develop suitable strategies, prevention plans, and emergency management plans to reduce these losses. They can also help direct adequate emergency measures to the most impacted areas in the early hours and days after an earthquake occurs.
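The city-wide damage ratio quoted above is, in essence, an aggregation over the per-mesh building inventory. A minimal sketch of that aggregation, in the spirit of the RADIUS mesh/sector scheme (the mesh counts below are invented for illustration, not the Constantine inventory):

```python
# Hypothetical per-mesh inventory: number of buildings and number
# expected to be damaged under the earthquake scenario.
meshes = [
    {"buildings": 150, "damaged": 45},
    {"buildings": 300, "damaged": 60},
    {"buildings": 120, "damaged": 30},
]

total_buildings = sum(m["buildings"] for m in meshes)
total_damaged = sum(m["damaged"] for m in meshes)
damage_ratio = total_damaged / total_buildings  # fraction of building stock damaged
```

In the real study the same roll-up would run over the 228 meshes and 30,706 inventoried buildings, with the damage maps produced mesh by mesh in the GIS.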

Keywords: seismic risk, mitigation, RADIUS, urban areas, Algeria, earthquake scenario, Constantine

Procedia PDF Downloads 262
131 The Role of People in Continuing Airworthiness: A Case Study Based on the Royal Thai Air Force

Authors: B. Ratchaneepun, N.S. Bardell

Abstract:

It is recognized that people are the main drivers in almost all the processes that affect airworthiness assurance. This is especially true in the area of aircraft maintenance, which is an essential part of continuing airworthiness. This work investigates what impact English language proficiency, the intersection of the military and Thai cultures, and the lack of initial and continuing human factors training have on the work performance of maintenance personnel in the Royal Thai Air Force (RTAF). A quantitative research method based on a cross-sectional survey was used to gather data about these three key aspects of “people” in a military airworthiness environment. Thirty questions were developed addressing the crucial topics of English language proficiency, the impact of culture, and human factors training. The officers and non-commissioned officers (NCOs) who work for the Aeronautical Engineering Divisions in the RTAF comprised the survey participants. The survey data were analysed using t-tests to evaluate various hypotheses. English competency in the RTAF is very important since all of the service manuals for Thai military aircraft are written in English. Without such competency, it is difficult for maintenance staff to perform tasks and correctly interpret the relevant maintenance manual instructions; any misunderstandings could lead to potential accidents. The survey results showed that the officers appreciated the importance of this more than the NCOs, who are the people actually doing the hands-on maintenance work. Military culture focuses on the success of a given mission and amplifies the power distance between the lower and higher ranks. In Thai society, a power distance also exists between younger and older citizens. In the RTAF, such a combination tends to inhibit a just reporting culture and hence hinders safety.
The survey results confirmed this, showing that the older people and higher ranks involved with RTAF aircraft maintenance believe that the workplace has a positive safety culture and climate, whereas the younger people and lower ranks think the opposite. The final area of consideration concerned human factors training and non-technical skills training. The survey revealed that those participants who had previously attended such courses appreciated its value and were aware of its benefits in daily life. However, currently there is no regulation in the RTAF to mandate recurrent training to maintain such knowledge and skills. The findings from this work suggest that the people involved in assuring the continuing airworthiness of the RTAF would benefit from: (i) more rigorous requirements and standards in the recruitment, initial training and continuation training regarding English competence; (ii) the development of a strong safety culture that exploits the uniqueness of both the military culture and the Thai culture; and (iii) providing more initial and recurrent training in human factors and non-technical skills.
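The group comparisons described above rest on the two-sample t-test. A minimal sketch of Welch's t statistic (which does not assume equal variances) applied to invented Likert-scale responses; the data below are illustrative, not the study's:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical 1-5 Likert responses on the perceived importance of
# English proficiency, one sample per group (not the study's data).
officers = [4, 5, 4, 5, 5, 4]
ncos = [3, 3, 4, 2, 3, 3]
t_stat, dof = welch_t(officers, ncos)
```

A large positive `t_stat` here would correspond to the reported finding that officers rated English proficiency as more important than NCOs did.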

Keywords: aircraft maintenance, continuing airworthiness, military culture, people, Royal Thai Air Force

Procedia PDF Downloads 130
130 Using Scilab® as New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals in all educational areas need to help overcome the difficulties that those starting their academic journey face in building a good understanding of the field. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation, with a detailed explanation of the fundamental numerical solution using the finite difference method in SCILAB, a free and easily accessible software that can be used by any research center or university, anywhere, in both developed and developing countries. It is known that Computational Fluid Dynamics (CFD) is a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs faces difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD into the teaching of Transport Phenomena at the undergraduate level by analyzing a small classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory: the partial differential equation governing the heat transfer problem, which students must master, is discretized using Taylor series expansion, generating a system of equations whose convergence is checked using the Sassenfeld criterion and which is finally solved by the Gauss-Seidel method.
In this work we demonstrate both simple problems solved manually and more complex problems that required computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® to study heat transfer in a rectangular plate with a different temperature on each of its four sides, producing a two-dimensional transport problem with colored graphic simulation. With the spread of computer technology, numerous programs have emerged that demand great programming skill from researchers. Considering that this ability to program is the main obstacle to be overcome, both by students and by researchers, we present in this article a suggestion for using programs with less complex interfaces, enabling less difficulty in producing graphical modeling and simulation for CFD and extending programming experience to undergraduates.
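The heated-plate workflow described above can be sketched compactly in any language; a minimal Python version of the Gauss-Seidel finite difference solution for a square plate with a different fixed (Dirichlet) temperature on each side is shown below. The grid size, tolerance, and boundary temperatures are illustrative assumptions, not the paper's values:

```python
import numpy as np

def solve_plate(n=20, t_top=100.0, t_bottom=0.0, t_left=75.0, t_right=50.0,
                tol=1e-5, max_iter=10000):
    """Gauss-Seidel solution of the steady 2D heat equation on an n x n plate."""
    T = np.zeros((n, n))
    T[0, :] = t_top                     # fixed boundary temperatures
    T[-1, :] = t_bottom
    T[:, 0] = t_left
    T[:, -1] = t_right
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):       # sweep interior nodes in place
            for j in range(1, n - 1):
                new = 0.25 * (T[i + 1, j] + T[i - 1, j] + T[i, j + 1] + T[i, j - 1])
                max_diff = max(max_diff, abs(new - T[i, j]))
                T[i, j] = new
        if max_diff < tol:              # converged
            break
    return T
```

The five-point stencil used here (each interior node as the average of its four neighbors) is exactly the Taylor-series discretization of the Laplace equation mentioned in the abstract; the in-place sweep is what makes it Gauss-Seidel rather than Jacobi.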

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 385
129 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
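The core of a tradespace exploration is easy to sketch: enumerate component combinations, score each with a multi-attribute utility, and keep the Pareto-efficient (non-dominated) designs in the cost-utility plane. Everything below (the component catalogues, attribute values, and weights) is a hypothetical illustration of the mechanics, not the study's model:

```python
import itertools

# Hypothetical component catalogues with normalized attribute scores.
SENSORS = [
    {"name": "lidar",  "accuracy": 0.95, "cost": 8.0},
    {"name": "radar",  "accuracy": 0.80, "cost": 3.0},
    {"name": "camera", "accuracy": 0.70, "cost": 1.0},
]
CPUS = [
    {"name": "edge",   "throughput": 0.6, "cost": 1.0},
    {"name": "server", "throughput": 0.9, "cost": 4.0},
]
WEIGHTS = {"accuracy": 0.6, "throughput": 0.4}  # assumed stakeholder weights

def tradespace():
    """Enumerate designs, score utility, and return the Pareto front by cost."""
    designs = []
    for s, c in itertools.product(SENSORS, CPUS):
        utility = (WEIGHTS["accuracy"] * s["accuracy"]
                   + WEIGHTS["throughput"] * c["throughput"])
        designs.append({"design": (s["name"], c["name"]),
                        "utility": utility, "cost": s["cost"] + c["cost"]})
    # A design is dominated if another has strictly higher utility
    # at lower-or-equal cost.
    pareto = [d for d in designs
              if not any(o["utility"] > d["utility"] and o["cost"] <= d["cost"]
                         for o in designs)]
    return sorted(pareto, key=lambda d: d["cost"])
```

A non-deterministic variant, as the abstract proposes, would replace each fixed attribute score with a distribution and filter the tradespace on statistics of the resulting utility (e.g. expected utility or a quantile).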

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 183
128 Identification of Three Strategies to Enhance University Students’ Professional Identity, Using Hierarchical Regression Analysis

Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez

Abstract:

Students’ transitions from high school to university are challenged by the lack of continuity between the two contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases dropout rates and reduces students’ academic success. This discontinuity emanates because ‘transitions concern a restructuring of what the person does and who the person perceives him or herself to be’. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as ‘who am I?’ or ‘who are we?’. It comprises a personal identity and as many social identities as the groups the individual feels part of. One way to construct a social identity is identification with a profession. For this reason, a way to ease the tension generated during transitions is to apply strategies oriented toward enhancing students’ professional identity at their point of entry to the higher education institution. That would create a sense of continuity between the high school and higher education contexts, increasing their professional identity strength. To develop strategies oriented toward enhancing students’ professional identity, it is important to analyze what influences it. Several factors influence professional identity (e.g., professional status, the recommendation of family and peers, the academic environment, or the chosen bachelor degree). There is a gap in the literature analyzing the impact of these factors across more than one bachelor degree. In this regard, our study takes an additional step with the aim of evaluating the influence of several factors on professional identity using a cohort of university students from multiple degrees between the ages of 17 and 19.
To do so, we used hierarchical regression analyses to assess the impact of the following factors: External Motivational Conditionals (EMC), Educational Experience Conditionals (EEC) and Personal Motivational Conditionals (PMC). After conducting the analyses, we found that the assessed factors influenced students’ professional identity differently according to their bachelor degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics, and engineering students were influenced only by PMC. Based on those influences, we propose three different strategies aimed at enhancing students’ professional identity in the short and long term: to enhance students’ professional identity before entry to the university through campus visits and icebreaker activities; to apply recruitment strategies that provide realistic information about the bachelor degree; and to incorporate different activities, such as in-vitro, in-situ and self-directed activities, aimed at enhancing students’ professional identity longitudinally from within the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor degrees, for which there was previously no evidence. Second, using the obtained results as a benchmark, we contribute from a practical perspective by proposing several alternative strategies to increase students’ professional identity strength, aiming to ease their transition from high school to higher education.
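Hierarchical regression adds predictor blocks in steps and examines the change in R² at each step. A minimal sketch of the mechanics on synthetic data (variable names and effect sizes are invented for illustration; only the two-block structure mirrors the study's approach):

```python
import numpy as np

# Synthetic data: two predictor blocks and an outcome (hypothetical effects).
rng = np.random.default_rng(0)
n = 100
emc = rng.normal(size=n)                   # block 1 predictor
pmc = rng.normal(size=n)                   # block 2 predictor
identity = 0.4 * emc + 0.6 * pmc + 0.3 * rng.normal(size=n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with intercept, given a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared([emc], identity)          # block 1 only
r2_step2 = r_squared([emc, pmc], identity)     # block 1 + block 2
delta_r2 = r2_step2 - r2_step1                 # variance added by block 2
```

A meaningful `delta_r2` is what justifies retaining the second block; in the study, comparing these increments across degree cohorts is what reveals which factors matter for which discipline.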

Keywords: professional identity, higher education, educational strategies, students

Procedia PDF Downloads 144
127 Influence of Recycled Concrete Aggregate Content on the Rebar/Concrete Bond Properties through Pull-Out Tests and Acoustic Emission Measurements

Authors: L. Chiriatti, H. Hafid, H. R. Mercado-Mendoza, K. L. Apedo, C. Fond, F. Feugeas

Abstract:

Substituting natural aggregate with recycled aggregate coming from concrete demolition represents a promising alternative to face the issues of both the depletion of natural resources and the congestion of waste storage facilities. However, the crushing process of concrete demolition waste, currently in use to produce recycled concrete aggregate, does not allow the complete separation of natural aggregate from a variable amount of adhered mortar. Given the physicochemical characteristics of the latter, the introduction of recycled concrete aggregate into a concrete mix modifies, to a certain extent, both fresh and hardened concrete properties. As a consequence, the behavior of recycled reinforced concrete members could likely be influenced by the specificities of recycled concrete aggregates. Beyond the mechanical properties of concrete, and as a result of the composite character of reinforced concrete, the bond characteristics at the rebar/concrete interface have to be taken into account in an attempt to describe accurately the mechanical response of recycled reinforced concrete members. Hence, a comparative experimental campaign, including 16 pull-out tests, was carried out. Four concrete mixes with different recycled concrete aggregate content were tested. The main mechanical properties (compressive strength, tensile strength, Young’s modulus) of each concrete mix were measured through standard procedures. A single 14-mm-diameter ribbed rebar, representative of the diameters commonly used in the domain of civil engineering, was embedded into a 200-mm-side concrete cube. The resulting concrete cover is intended to ensure a pull-out type failure (i.e. exceedance of the rebar/concrete interface shear strength). A pull-out test carried out on the 100% recycled concrete specimen was enriched with exploratory acoustic emission measurements. 
Acoustic event location was performed by means of eight piezoelectric transducers distributed over the whole surface of the specimen. The resulting map was compared to existing data for natural aggregate concrete. Damage distribution around the reinforcement and the main features of the characteristic bond stress/free-end slip curve appeared to be similar to previous results obtained in comparable studies on natural aggregate concrete. This seems to show that the usual bond mechanism sequence (‘chemical adhesion’, mechanical interlocking and friction) remains unchanged despite the addition of recycled concrete aggregate. However, the results also suggest that bond efficiency is somewhat improved by the use of recycled concrete aggregate. This observation appears counter-intuitive given the decrease of the main concrete mechanical properties with increasing recycled concrete aggregate content. As a consequence, the impact of recycled concrete aggregate content on bond characteristics represents an important factor which should be taken into account and further explored in order to determine flexural parameters such as deflection or crack distribution.
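The bond stress plotted on the characteristic bond stress/slip curve is conventionally the pull-out force averaged over the embedded rebar surface, τ = F / (π·d·l). A minimal sketch with the 14 mm rebar diameter from the abstract; the peak load and embedded length below are hypothetical placeholders:

```python
import math

def bond_stress(pullout_force_n, rebar_diameter_m, embedded_length_m):
    """Average bond stress over the embedded rebar surface, in Pa."""
    return pullout_force_n / (math.pi * rebar_diameter_m * embedded_length_m)

# Hypothetical values: 30 kN peak load, 14 mm rebar, 70 mm (5d) embedment.
tau = bond_stress(30e3, 0.014, 0.070)  # on the order of 10 MPa
```

Comparing this average stress at peak load across the four mixes is what quantifies the bond-efficiency improvement the abstract reports.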

Keywords: acoustic emission monitoring, high-bond steel rebar, pull-out test, recycled aggregate concrete

Procedia PDF Downloads 171
126 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies applied to a replica of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is tailored to fire problems. The applicability of FDS has been validated in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire, due to its crushable foam plasticity model, which can accurately capture the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
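The one-way versus two-way distinction can be sketched with stand-in solvers. The toy heat and deformation models below are entirely hypothetical and only illustrate the data flow between the two disciplines, not the actual FDS-2-ABAQUS interface:

```python
def cfd_step(t, deformation):
    """Stand-in for an FDS call: returns a surface temperature (degrees C).
    In this toy model, deformation opening up joints increases heat ingress."""
    return 20.0 + 50.0 * t * (1.0 + deformation)

def fea_step(temperature):
    """Stand-in for an ABAQUS call: returns a dimensionless panel
    deformation from a toy linear thermal response."""
    return max(0.0, (temperature - 20.0) * 1e-3)

def run(coupling, n_steps=10):
    deformation = 0.0
    history = []
    for t in range(1, n_steps + 1):
        # one-way: the thermal field never sees the deformed geometry
        temp = cfd_step(t, deformation if coupling == "two-way" else 0.0)
        deformation = fea_step(temp)
        history.append((temp, deformation))
    return history

one_way = run("one-way")
two_way = run("two-way")
# feeding deformation back raises the thermal load in the two-way case
print(two_way[-1][0] > one_way[-1][0])
```

The design point being illustrated: in the one-way loop the CFD result is a fixed input to the FEA, while in the two-way loop the geometry change feeds back and alters the thermal solution itself.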

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 89
125 System-Driven Design Process for Integrated Multifunctional Movable Concepts

Authors: Oliver Bertram, Leonel Akoto Chama

Abstract:

In today's civil transport aircraft, the design of flight control systems is based on the experience gained from previous aircraft configurations, with a clear distinction between primary and secondary flight control functions for controlling the aircraft attitude and trajectory. Significant system improvements are now seen particularly in multifunctional movable concepts, where the flight control functions are no longer considered separate but integral. This allows new functions to be implemented in order to improve the overall aircraft performance. However, the classical design process for flight controls is sequential and insufficiently interdisciplinary. In particular, the systems discipline is involved only rudimentarily in the early phase. In many cases, the task of systems design is limited to meeting the requirements of the upstream disciplines, which may lead to integration problems later. For this reason, approaching the design with incremental development is required to reduce the risk of a complete redesign. Although the potential of and the path to multifunctional movable concepts have been shown, the complete re-engineering of aircraft concepts with less classical movable concepts is associated with a considerable design risk due to the lack of design methods. This represents an obstacle to major leaps in technology. This gap in the state of the art is widened even further if, in the future, unconventional aircraft configurations are to be considered, for which no reference data or architectures are available. This means that the above-mentioned experience-based approach used for conventional configurations is of limited use and not applicable to the next generation of aircraft. In particular, there is a need for methods and tools for rapid trade-offs between new multifunctional flight control system architectures.
To close this gap in the state of the art, an integrated system-driven design process for multifunctional flight control systems of non-classical aircraft configurations is presented. The overall goal of the design process is to quickly find optimal solutions for single or combined target criteria within the very large solution space for the flight control system. In contrast to the state of the art, all disciplines are involved in a holistic design within an integrated rather than a sequential process. To emphasize the systems discipline, this paper focuses on the methodology for designing movable actuation systems in the context of this integrated design process for multifunctional movables. The methodology includes different approaches for creating system architectures, component design methods, as well as the process outputs necessary to evaluate the systems. An application example based on a reference configuration is used to demonstrate the process and validate the results. For this, new unconventional hydraulic and electrical flight control system architectures are calculated, which result from the higher requirements of multifunctional movable concepts. In addition to typical key performance indicators such as mass and required power, the results regarding the feasibility and wing-integration aspects of the system components are examined and discussed. This is intended to show how the systems design can influence and drive the wing and overall aircraft design.

Keywords: actuation systems, flight control surfaces, multi-functional movables, wing design process

Procedia PDF Downloads 144
124 Steel Concrete Composite Bridge: Modelling Approach and Analysis

Authors: Kaviyarasan D., Satish Kumar S. R.

Abstract:

India, being vast in area and population and having great scope for international business, is expected to see major growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old. So there is a strong possibility of such bridges being repaired or newly constructed in India. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing by instability, because the members are too rigid and stocky rather than being flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral-load behaviour, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member, and so on, under further loading.
This kind of progressive collapse of the truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. Ultimate collapse, however, occurs only through buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complex dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of bridge analysis and design has moved to the state of ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mainly with life safety and collapse prevention, whereas bridges mostly deal with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. This paper compiles the wide spectrum of work, from modeling to analysis, on steel-concrete composite truss bridges in general.
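Since the abstract notes that ultimate collapse occurs through buckling of the compression members, the elementary check behind such member assessments can be sketched with the Euler critical load. The section and length below are assumed illustrative values for a truss chord, not data from any bridge in the surveyed studies:

```python
import math

def euler_critical_load_kn(E_mpa, I_mm4, K, L_mm):
    """Euler elastic critical buckling load, P_cr = pi^2 * E * I / (K*L)^2,
    returned in kN for E in MPa, I in mm^4 and L in mm."""
    return math.pi**2 * E_mpa * I_mm4 / (K * L_mm) ** 2 / 1e3

# Illustrative steel truss chord: E = 200 GPa, I = 5.0e7 mm^4,
# pin-ended (K = 1.0), 6 m panel length -- assumed values only.
P_cr = euler_critical_load_kn(200_000, 5.0e7, 1.0, 6_000)
print(f"{P_cr:.0f} kN")  # -> 2742 kN
```

In a progressive-collapse simulation this elastic check would be one ingredient; inelastic buckling and member interaction require the nonlinear analysis the abstract describes.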

Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge

Procedia PDF Downloads 185
123 The Effect of Calcium Phosphate Composite Scaffolds on the Osteogenic Differentiation of Rabbit Dental Pulp Stem Cells

Authors: Ling-Ling E, Lin Feng, Hong-Chen Liu, Dong-Sheng Wang, Zhanping Shi, Juncheng Wang, Wei Luo, Yan Lv

Abstract:

The objective of this study was to compare the effects of two calcium phosphate composite scaffolds on the attachment, proliferation and osteogenic differentiation of rabbit dental pulp stem cells (DPSCs). One, nano-hydroxyapatite/collagen/poly(L-lactide) (nHAC/PLA), imitating the composition and micro-structural characteristics of natural bone, was made by Beijing Allgens Medical Science & Technology Co., Ltd. (China). The other, beta-tricalcium phosphate (β-TCP), having a fully interconnected globular pore structure, was provided by Shanghai Bio-lu Biomaterials Co., Ltd. (China). We compared the water absorption rate and the protein adsorption rate of the two scaffolds, and the characterization of DPSCs cultured on the culture plate and on both scaffolds under osteogenic differentiation media (ODM) treatment. The constructs were then implanted subcutaneously into the backs of severe combined immunodeficient (SCID) mice for 8 and 12 weeks to compare their bone formation capacity. The results showed that the ODM-treated DPSCs expressed osteocalcin (OCN), bone sialoprotein (BSP), type I collagen (COLI) and osteopontin (OPN) by immunofluorescence staining. Positive alkaline phosphatase (ALP) staining, calcium deposition and calcium nodules were also observed on the ODM-treated DPSCs. The nHAC/PLA had a significantly higher water absorption rate and protein adsorption rate than β-TCP. The initial attachment of DPSCs seeded onto nHAC/PLA was significantly higher than that onto β-TCP, and the proliferation rate of the cells was significantly higher than that on β-TCP on days 1, 3 and 7 of cell culture. DPSCs+β-TCP had significantly higher ALP activity, calcium/phosphorus content and mineral formation than DPSCs+nHAC/PLA.
When implanted into the backs of SCID mice, nHAC/PLA alone showed no new bone formation; newly formed mature bone and osteoid were observed only in β-TCP alone, DPSCs+nHAC/PLA and DPSCs+β-TCP, and these three groups displayed increased bone formation over the 12-week period. The percentage of total bone formation area showed no difference between DPSCs+β-TCP and DPSCs+nHAC/PLA at each time point, but the percentage of mature bone formation area of DPSCs+β-TCP was significantly higher than that of DPSCs+nHAC/PLA. Our results demonstrated that the DPSCs on nHAC/PLA had better proliferation, that the DPSCs on β-TCP showed more mineralization in vitro, and that much more newly formed mature bone in vivo was present in the DPSCs+β-TCP group. These findings provide further evidence that scaffold architecture has a differential influence on the attachment, proliferation and differentiation of cells. This study may provide insight into clinical periodontal bone tissue repair with the DPSCs+β-TCP construct.

Keywords: dental pulp stem cells, nano-hydroxyapatite/collagen/poly(L-lactide), beta-tricalcium phosphate, periodontal tissue engineering, bone regeneration

Procedia PDF Downloads 333
122 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake input data for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to ensure that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system, and that the characteristics of these parameters are coherent with those of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and of the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial geometry of the structure due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF, as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes, which neglect the importance of PEF and SSI, can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is found in the case where SSI is included in the scenario. The results are validated against the literature.
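One reason fire resistance drops for a damaged steel frame is that steel loses yield strength with temperature. A minimal sketch of this degradation, using the carbon-steel yield-strength reduction factors as commonly tabulated in EN 1993-1-2; the linear interpolation rule and the 550 degree query are this sketch's own choices, not part of the study:

```python
# EN 1993-1-2 yield-strength reduction factors for carbon steel,
# k_y = f_y(theta) / f_y(20 C), at selected tabulated temperatures.
KY = [(20, 1.00), (400, 1.00), (500, 0.78), (600, 0.47),
      (700, 0.23), (800, 0.11), (900, 0.06)]

def yield_reduction(theta_c):
    """Residual fraction of steel yield strength at temperature theta (C),
    linearly interpolated between tabulated points."""
    for (t0, k0), (t1, k1) in zip(KY, KY[1:]):
        if t0 <= theta_c <= t1:
            return k0 + (k1 - k0) * (theta_c - t0) / (t1 - t0)
    raise ValueError("temperature outside tabulated range")

print(f"{yield_reduction(550):.3f}")  # -> 0.625
```

A PEF analysis additionally starts from mechanically degraded properties, so the residual capacity at a given fire temperature is lower still than this undamaged-material estimate.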

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 121
121 Data Calibration of the Actual versus the Theoretical Micro Electro Mechanical Systems (MEMS) Based Accelerometer Reading through Remote Monitoring of Padre Jacinto Zamora Flyover

Authors: John Mark Payawal, Francis Aldrine Uy, John Paul Carreon

Abstract:

This paper shows the application of Structural Health Monitoring (SHM) to bridges. Bridges are structures built to provide passage over a physical obstruction such as rivers, chasms or roads. The Philippines has a total of 8,166 national bridges, as published in the 2015 atlas of the Department of Public Works and Highways (DPWH), and only 2,924, or 35.81%, of these bridges are in good condition. As a result, PHP 30.464 billion of the 2016 DPWH budget is allocated to road and/or bridge maintenance alone. This intensive spending is owed to the present practice of outdated manual inspection and assessment, and poor structural health monitoring of Philippine infrastructure. As the School of Civil, Environmental, & Geological Engineering of the Mapua Institute of Technology (MIT) continues its passion for research-based projects, a partnership with the Department of Science and Technology (DOST) and the DPWH launched the application of SHM on the Padre Jacinto Zamora Flyover. The flyover is located along Nagtahan Boulevard in Sta. Mesa, Manila, and connects Brgy. 411 and Brgy. 635. It serves vehicles going from Lacson Avenue to Mabini Bridge passing over Legarda Flyover. The flyover was chosen among the many bridges located in Metro Manila as the focus of the pilot testing due to its site accessibility and the complete structural as-built plans and specifications necessary for SHM, as provided by the Bureau of Design (BOD) of the DPWH. This paper focuses on providing a method to calibrate theoretical readings from STAAD.Pro V8i and sync the data to actual MEMS accelerometer readings. It is observed that, while the design standards used in constructing the flyover were reflected in the model, actual MEMS accelerometer readings display a large difference compared to the theoretical data taken from STAAD.Pro V8i.
To achieve a true seismic response of the modeled bridge, and hence sync the theoretical data to the actual sensor readings (the independent variable of this paper), a single-degree-of-freedom (SDOF) analysis of the flyover under free vibration without damping is performed in STAAD.Pro V8i. The bridge is subjected to earthquake ground motion in the form of ground acceleration, characterized by the Peak Ground Acceleration (PGA). A translational acceleration load is used to simulate the ground motion of the time history analysis acceleration record in STAAD.Pro V8i.
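The undamped SDOF idealization used above reduces to a single pair of formulas, omega_n = sqrt(k/m) and f = omega_n / 2*pi, which is the quantity one would compare against the dominant frequency seen in the MEMS accelerometer record. The stiffness and mass below are assumed illustrative values; the flyover's actual properties are not given in the abstract:

```python
import math

def sdof_free_vibration(k_n_per_m, m_kg):
    """Undamped SDOF natural circular frequency (rad/s),
    natural frequency (Hz) and period (s)."""
    omega = math.sqrt(k_n_per_m / m_kg)   # rad/s
    freq = omega / (2.0 * math.pi)        # Hz
    return omega, freq, 1.0 / freq        # period in s

# Illustrative values only: k = 2e8 N/m, m = 5e5 kg.
omega, f, T = sdof_free_vibration(2.0e8, 5.0e5)
print(f"f = {f:.2f} Hz, T = {T:.3f} s")  # -> f = 3.18 Hz, T = 0.314 s
```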

Keywords: accelerometer, analysis using single degree of freedom, micro electro mechanical system, peak ground acceleration, structural health monitoring

Procedia PDF Downloads 319
120 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high performance PEC devices. Therefore, it is very important to understand how the oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or micro-crystals can have two possible effects on the PEC performance. Firstly, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing the PEC performance. On the other hand, the defects could lead to higher light absorption in the longer wavelength region and may act as energy centers for the water splitting reaction, which can improve the PEC performance. Even though the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate the structures of ZnO with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of the mutual interactions. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not yet been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance.
Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated the PEC performances with them. Especially, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which clearly suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism of the formation of specific structural shapes and oxygen vacancies via dislocations, and their impact on PEC performance, are explored. In water splitting performance, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under ~360 nm UV light) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
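The IPCE figure quoted above follows from the standard relation IPCE(%) = 1240 * J / (lambda * P) * 100, with J in mA/cm², lambda in nm and P in mW/cm² (1240 being approximately hc/e in V*nm). The sketch below uses the reported J and wavelength; the 50 mW/cm² illumination intensity is an assumed value, not one given in the abstract:

```python
def ipce_percent(j_ma_cm2, wavelength_nm, power_mw_cm2):
    """Incident photon-to-current conversion efficiency (%),
    IPCE = 1240 * J / (lambda * P) * 100, for J in mA/cm^2,
    lambda in nm and P in mW/cm^2."""
    return 1240.0 * j_ma_cm2 / (wavelength_nm * power_mw_cm2) * 100.0

# J = 1.517 mA/cm^2 at ~360 nm as reported; the 50 mW/cm^2
# illumination intensity is an assumption for illustration.
print(f"{ipce_percent(1.517, 360, 50.0):.2f} %")  # -> 10.45 %
```

With this assumed intensity the formula lands close to the reported 10.41%, which is consistent with an illumination intensity in that neighborhood, though the actual value would come from the measurement setup.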

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294
119 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

Anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at chromosomal territories may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatic prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets from colon cancer patient samples, such as GSE44076, GSE20916, and GSE37364, were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins.
Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of lipid rafts and the RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that signaling pathways act as a connecting link between the membrane hub and the gene hub.
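The DEG screening step described above typically amounts to thresholding a GEO2R-style result table on fold change and adjusted p-value. The sketch below is a minimal version of that filter; the gene rows are synthetic, and the thresholds (|logFC| >= 1, adjusted P < 0.05) are conventional choices, not necessarily the ones used in this study:

```python
# Toy GEO2R-style result rows; genes and values are synthetic.
rows = [
    {"gene": "COL1A1", "logFC": 2.8,  "adjP": 1e-6},
    {"gene": "SPP1",   "logFC": 3.1,  "adjP": 4e-5},
    {"gene": "ACTB",   "logFC": 0.2,  "adjP": 0.40},
    {"gene": "CYP2B6", "logFC": -2.4, "adjP": 2e-4},
    {"gene": "SPARC",  "logFC": 1.9,  "adjP": 9e-3},
]

def screen_degs(rows, lfc=1.0, alpha=0.05):
    """Split an expression result table into up- and down-regulated
    DEG lists by fold-change and adjusted-p thresholds."""
    sig = [r for r in rows if r["adjP"] < alpha]
    up = [r["gene"] for r in sig if r["logFC"] >= lfc]
    down = [r["gene"] for r in sig if r["logFC"] <= -lfc]
    return up, down

up, down = screen_degs(rows)
print(up, down)  # -> ['COL1A1', 'SPP1', 'SPARC'] ['CYP2B6']
```

In the actual workflow, the intersection of such lists across the three GEO datasets would then feed the GO/KEGG enrichment and PPI network construction.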

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 128
118 Aspects Concerning the Use of Recycled Concrete Aggregates

Authors: Ion Robu, Claudiu Mazilu, Radu Deju

Abstract:

Natural aggregates (gravel and crushed stone) are essential non-renewable resources used for infrastructure works and civil engineering. In the European Union member states of Southeast Europe, it is estimated that the construction industry will grow by 4.2%, further complicating aggregate supply management. In addition, a significant problem associated with the aggregates industry is the loss of potential resources through the dumping of inert waste, especially waste from construction and demolition activities. In 2012, in Romania, less than 10% of construction and demolition waste (including concrete) was valorized, while the European Union requires that by 2020 this proportion should be at least 70% (Directive 2008/98/EC on waste, transposed into Romanian legislation by Law 211/2011). Depending on the efficiency of waste processing and the quality of the recycled concrete aggregate (RCA) obtained, poor-quality aggregate can be used as foundation material for roads, while high-quality aggregate can be used in new concrete for construction. To obtain good-quality concrete using recycled aggregate, it is necessary to meet the minimum requirements defined by the rules for the manufacture of concrete with natural aggregate. The properties of the recycled aggregate (density, granulosity, granule shape, water absorption, weight loss in the Los Angeles test, attached mortar content, etc.) are the basis for concrete quality; establishing appropriate proportions between components and the concrete production methods are also extremely important for its quality. This paper presents a study on the use of recycled aggregates, from a concrete of specified class, to obtain new cement concrete with different percentages of recycled aggregates. To produce the recycled aggregates, several batches of concrete of classes C16/20, C25/30 and C35/45 were made, the composition calculation being made according to NE 012/2007 and CP 012/2007.
Tests for producing recycled aggregate were carried out using concrete samples of the three established classes after 28 days of storage under the above conditions. Cubes with a 150 mm side were crushed in a first stage with a Liebherr jaw crusher set to a nominal 50 mm. The resulting material was separated by sieving into granulometric sorts, and the 10-50 sort was used for preliminary crushing tests in the second stage with a Retsch BB 200 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher, respectively. The influence of the type of crusher used to obtain the recycled aggregates on granulometry and granule shape was highlighted, as was the influence of the attached mortar on density, water absorption, behavior in the Los Angeles test, etc. The proportion of attached mortar was determined and correlated with the provenance concrete class of the recycled aggregates and their granulometric sort. The aim of characterizing the recycled aggregates is their valorization in new concrete used in construction. In this regard, a series of concretes was made in which the recycled aggregate content was varied from 0 to 100%. The new concretes were characterized in terms of the change in density and compressive strength with the proportion of recycled aggregates. It has been shown that an increase in recycled aggregate content does not necessarily mean a reduction in compressive strength, the quality of the aggregate having a decisive role.

Keywords: recycled concrete aggregate, characteristics, recycled aggregate concrete, properties

Procedia PDF Downloads 212
117 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to find the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies between several seconds and several minutes. All data are also labeled normal or abnormal. Due to the limited recording time of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea is presented: in addition to using the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. Moreover, the results of the proposed algorithm indicated that greater use of nonlinear characteristics led to better performance in classifying normal and patient signals. Today, research aims at quantitatively analyzing the linear and non-linear, deterministic and random nature of the heart rate variability signal, because it has been shown that the amount of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, given the limited time available and the fact that some of the information in this signal is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
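The area-under-the-ROC-curve figure of merit used above can be computed directly from the rank-sum (Mann-Whitney) identity, without tracing the curve point by point; the labels and scores below are hypothetical, not the study's data:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive (abnormal) scores higher than a randomly chosen
    negative (normal). Ties count half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical classifier scores for 6 recordings (1 = abnormal).
y = [0, 0, 0, 1, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
print(roc_auc(y, s))  # → 0.777... (7 of 9 positive/negative pairs ranked correctly)
```

The pairwise count is O(n²) but exact; for large sample sizes a rank-based O(n log n) variant or a library routine would be used instead.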

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 262
116 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain

Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez

Abstract:

The effect of a rapid drawdown is a classical scenario to be considered in slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level, and it is a typical verification in the dam engineering discipline, as reservoir water levels commonly fluctuate noticeably between seasons and for operational reasons. Although the scenario is well known and generally predictable, site conditions can increase the complexity of its assessment, and unexpected external factors can reduce the stability of, or even fail, a slope under a rapid drawdown situation. The present paper describes and discusses the interaction between two different infrastructures, a dam and a highway, and the impact on the stability of a natural rock slope overlaid by the north abutment of a viaduct of the A-44 Highway caused by the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain). In 2011, with both infrastructures, the A-44 Highway and the Rules Dam, already constructed, delivered and under operation, movements started to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which is part of the highway and carries it across the tail of the reservoir. The embankment and abutment were founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing fault and weak planes. After the first filling of the reservoir, to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011 and 2012, to relative levels of 234 m, 232 m and 225 m.
To understand the effect of these drawdowns on the strength of the weak rock mass and on its stability, a new geological model was developed after reviewing all the available ground investigations, updating the geological mapping of the area and supplementing it with additional geotechnical and geophysical investigations. Together with all this information, rainfall and reservoir level evolution data were reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the same direction as the schistosity of the phyllitic rock mass, which also coincides with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) will be described, and the geomechanical characterization of these weak rocks will be discussed, together with the influence of water level variations, not only on the water pressure regime but also on the geotechnical behavior of the rock, through the modification of its strength and deformability parameters.
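As a back-of-the-envelope illustration of why a rapid drawdown is destabilizing (the paper itself uses Slide2 limit-equilibrium models, not this simplification), a textbook infinite-slope factor-of-safety calculation in effective stresses can be sketched. All parameter values below are assumed for illustration, not taken from the site investigation:

```python
import math

def infinite_slope_fos(c_eff, phi_eff_deg, gamma_sat, z, beta_deg, u):
    """Infinite-slope FoS in effective stresses:
    FoS = (c' + (sigma_n - u) * tan(phi')) / tau, with stresses on a
    slip plane at depth z parallel to a slope inclined at beta."""
    beta, phi = math.radians(beta_deg), math.radians(phi_eff_deg)
    sigma_n = gamma_sat * z * math.cos(beta) ** 2             # total normal stress, kPa
    tau = gamma_sat * z * math.sin(beta) * math.cos(beta)     # shear stress, kPa
    return (c_eff + (sigma_n - u) * math.tan(phi)) / tau

# Hypothetical weak-phyllite parameters: c' = 5 kPa, phi' = 24 deg,
# gamma_sat = 22 kN/m3, slip depth z = 10 m, slope angle 20 deg.
gamma_w = 9.81
# Drained case (pore pressure dissipated) vs rapid drawdown (pore pressures
# still reflect the pre-drawdown reservoir level, water head z_w = 8 m).
fos_drained = infinite_slope_fos(5.0, 24.0, 22.0, 10.0, 20.0, u=0.0)
u_rdd = gamma_w * 8.0 * math.cos(math.radians(20.0)) ** 2
fos_drawdown = infinite_slope_fos(5.0, 24.0, 22.0, 10.0, 20.0, u=u_rdd)
print(round(fos_drained, 2), round(fos_drawdown, 2))  # ~1.29 vs ~0.86
```

The point of the sketch is the mechanism: the external water load disappears quickly while pore pressures inside a low-permeability rock mass lag behind, so the effective normal stress, and with it the mobilized shear strength, drops.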

Keywords: monitoring, rock slope stability, water drawdown, weak rock

Procedia PDF Downloads 160
115 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann’s autopoietic social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained for the three existing groups of ecological phenomena: interaction, social and medical sciences. This hypothetical model, nevertheless, has a nonlinear interaction with its natural environment, an ‘interactional cycle’ for the exchange of photon energy with molecules without any change in topology. The external forces in the system’s environment might be concomitant with the influence of natural fluctuations (e.g. radioactive radiation, electromagnetic waves). The cantilever sensor offers insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board for intra-chip and inter-chip transmission, drawing approximately 1.7 mA at 3.3 V, to serve detection in locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels’ natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably.
The exterior actuators are designed to sense different variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for the spatial autocorrelation of molecular communication, drawing on human electromagnetic, piezoelectric, electrostatic and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously, in a real-time workspace and with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with fuzzy logic control for the detection of thermal and electrostatic changes and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspects of molecular frictional properties adjust to each other and form a unique spatial structure of modules, providing the environment’s mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.

Keywords: autopoiesis, nanoparticles, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system

Procedia PDF Downloads 124
114 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by ANNs, which have been found to be an appropriate tool for predicting the different pavement performance indices against different factors. Accordingly, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, each at three severity levels (low, medium, high). The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are distresses related to alligator cracking, swelling, rutting, and potholes.
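The study's network architecture and its 536,000-sample training set are not reproduced here, but the shape of such a regressor can be sketched with a one-hidden-layer network trained on synthetic distress data: 33 inputs (11 distress types at three severities, as percentage areas) mapping to a PCI-like target. All data, the ground-truth rule, and the hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: PCI drops as distress densities grow (assumed rule).
n, d, h = 500, 33, 16
X = rng.uniform(0, 10, size=(n, d))                  # % areas per distress/severity
w_true = rng.uniform(0.5, 3.0, size=d)
y = np.clip(100.0 - X @ w_true / 10.0 + rng.normal(0, 2, n), 0, 100)

# Standardize inputs and scale the target to [0, 1] for stable training.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y / 100.0).reshape(-1, 1)

# One-hidden-layer MLP (tanh) trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.3, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.3, (h, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(800):
    a = np.tanh(Xs @ W1 + b1)                        # hidden activations
    pred = a @ W2 + b2
    err = pred - ys
    losses.append(float((err ** 2).mean()))
    g_pred = 2 * err / n                             # dMSE/dpred
    gW2 = a.T @ g_pred; gb2 = g_pred.sum(0)
    g_a = g_pred @ W2.T * (1 - a ** 2)               # backprop through tanh
    gW1 = Xs.T @ g_a;   gb1 = g_a.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])  # training loss should fall substantially
```

A production model would use a deep-learning framework, held-out validation, and the real distress survey data; this sketch only shows the input encoding and training loop in miniature.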

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
113 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence

Authors: Yohanna Panshak

Abstract:

The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of many discourses among researchers and policy makers and has generated a lot of controversy over the years. This has prompted a series of research efforts towards understanding the remote causes of this phenomenon: its nature, its determinants, and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Scholars such as Akpan (2009), Aliyu (2009) and Olomola (2006) argue that oil volatility can drive economic growth or has the potential of doing so. On the contrary, Darby (1982) and Cerralo (2005) share the opinion that it can slow down growth. The former argument rests on the understanding that, for net oil-exporting economies, a price upturn directly increases real national income through higher export earnings, whereas the latter alludes to the case of net oil-importing countries, which, when prices rise, experience increased input costs, reduced non-oil demand, low investment, falling tax revenues and ultimately an increase in the budget deficit, which further reduces welfare. Therefore, the precise impact of oil price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil price volatility and its effect on the growth of the Nigerian economy is evolving and marching towards resolving Nigeria’s macroeconomic instability, as long as oil revenue remains the mainstay and driver of socio-economic engineering. Recently, the United States, a major importer of Nigeria’s oil, made a historic breakthrough towards a more efficient source of energy for its economy, with the capacity to serve a significant part of the world.
This undoubtedly suggests a threat to the country’s foreign exchange earnings, so the need to understand fluctuations in its major export commodity is critical. This paper leans on the renaissance growth theory, with particular focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil price changes and oil price volatility. Against this background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Auto Regression (VAR) econometric approach shall be used. The structural properties of the model shall be tested using the Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests for heteroscedasticity, serial correlation and normality shall also be carried out. Policy recommendations shall be offered based on the empirical findings, which should assist policy makers not only in Nigeria but the world over.
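The Augmented Dickey-Fuller test mentioned above builds on the basic Dickey-Fuller regression; a minimal sketch of that core statistic (without the augmenting lagged differences, and on synthetic rather than the study's data) is:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic for rho in the regression dy_t = alpha + rho*y_{t-1} + e_t.
    A unit-root series gives rho near 0; a stationary one gives a clearly
    negative rho. (A full ADF test adds lagged differences to absorb serial
    correlation and compares against Dickey-Fuller critical values.)"""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)               # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)                # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
e = rng.normal(size=400)
random_walk = np.cumsum(e)                           # unit-root process
stationary = np.empty(400)                           # AR(1), coefficient 0.5
stationary[0] = e[0]
for t in range(1, 400):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

print(dickey_fuller_t(random_walk), dickey_fuller_t(stationary))
```

In practice one would use a statistics package's ADF implementation, which also handles lag-order selection and the non-standard critical values; the sketch only shows what the statistic measures.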

Keywords: oil-price, volatility, prosperity, budget, expenditure

Procedia PDF Downloads 270
112 Thermosensitive Hydrogel Development for Its Possible Application in Cardiac Cell Therapy

Authors: Lina Paola Orozco Marin, Yuliet Montoya Osorio, John Bustamante Osorno

Abstract:

Ischemic events can culminate in acute myocardial infarction through irreversible cardiac lesions that cannot be restored, due to the limited regenerative capacity of the heart. Cell therapy seeks to replace these injured or necrotic cells by transplanting healthy and functional cells. The therapeutic alternatives proposed by tissue engineering and cardiovascular regenerative medicine include the use of biomaterials to mimic the native extracellular medium, which is full of proteins, proteoglycans, and glycoproteins. The selected biomaterials must provide structural support to the encapsulated cells to avoid their migration and death in the host tissue. In this context, the present research work focused on developing a natural thermosensitive hydrogel, its physical and chemical characterization, and the determination of its biocompatibility in vitro. The hydrogel was developed by mixing hydrolyzed bovine and porcine collagen at 2% w/v, chitosan at 2.5% w/v, and beta-glycerolphosphate at 8.5% w/w and 10.5% w/w, under magnetic stirring at 4°C. Once obtained, the thermosensitivity and gelation time were determined by incubating the samples at 37°C and evaluating them through the inverted tube method. The morphological characterization of the hydrogels was carried out through scanning electron microscopy. Chemical characterization was carried out employing infrared spectroscopy. Biocompatibility was determined using the MTT cytotoxicity test, according to the ISO 10993-5 standard, for the hydrogel’s precursors, using the fetal human ventricular cardiomyocyte cell line RL-14. The RL-14 cells were also seeded on top of the hydrogels, and the supernatants were subcultured at different time points for observation under a bright-field microscope.
Four types of thermosensitive hydrogels were obtained, which differ in composition and concentration, called A1 (chitosan/bovine collagen/beta-glycerolphosphate 8.5% w/w), A2 (chitosan/porcine collagen/beta-glycerolphosphate 8.5% w/w), B1 (chitosan/bovine collagen/beta-glycerolphosphate 10.5% w/w) and B2 (chitosan/porcine collagen/beta-glycerolphosphate 10.5% w/w). A1 and A2 had a gelation time of 40 minutes, and B1 and B2 had a gelation time of 30 minutes at 37°C. Electron micrographs revealed a three-dimensional internal structure with interconnected pores for all four types of hydrogel. This facilitates the exchange of nutrients and oxygen and the exit of metabolites, preserving a microenvironment suitable for cell proliferation. In the infrared spectra, it was possible to observe the interaction that occurs between the amides of the polymeric compounds and the phosphate groups of beta-glycerolphosphate. Finally, the biocompatibility tests indicated that cells in contact with the hydrogel, or with each of its precursors, were not affected in their proliferation capacity over a period of 16 days. These results show the potential of the hydrogel to increase the cell survival rate in the cardiac cell therapies under investigation. Moreover, they lay the foundations for its characterization and biological evaluation in both in vitro and in vivo models.

Keywords: cardiac cell therapy, cardiac ischemia, natural polymers, thermosensitive hydrogel

Procedia PDF Downloads 190
111 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system which would fit secondary railway lines, with an easy-to-implement design and a low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be internationally implemented, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('Counting Axles', in French 'compteur d’essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails, which is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) Train localization in space is precisely defined by a calibration system. The operation bypasses the GSM-R radio system of the ERTMS system; moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous.
ii) A mathematical topology adapted to train localization in space, following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter, and the rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already supports Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
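The frequency-based section separation described above can be illustrated with a toy matched-filter bank: each section is assigned a sinusoid orthogonal to the others over the analysis window, and the receiver correlates the noisy signal against all of them. The frequencies, window length, and noise level below are illustrative assumptions, not the system's actual parameters:

```python
import numpy as np

fs, T = 8000.0, 0.5                        # sampling rate (Hz), window (s)
t = np.arange(int(fs * T)) / fs
# Hypothetical section frequencies: each completes an integer number of
# cycles in the window, so the references are mutually orthogonal.
section_freqs = [1000.0, 1100.0, 1200.0, 1300.0]

rng = np.random.default_rng(0)
emitted = np.sin(2 * np.pi * section_freqs[2] * t)   # train on section 2
received = emitted + rng.normal(0, 2.0, t.size)      # strong additive noise

# Correlate against each section's reference signal (matched-filter bank).
scores = [abs(np.dot(received, np.sin(2 * np.pi * f * t)))
          for f in section_freqs]
print(int(np.argmax(scores)))  # → 2 (the emitting section)
```

Because the references are orthogonal, the correct section's correlation grows linearly with the window length while the cross-terms only accumulate noise, which is the mechanism that makes the separation failure probability computable from the SNR.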

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 331
110 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can either serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be lurking behind OpenAI’s seemingly endless responses an inventive model yet to be uncovered; some unforeseen reasoning may emerge from the interconnection of neural networks. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means to solve societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions: the design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes TRIZ 40.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, that is, to determine whether ChatGPT uses TRIZ, we will follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Problem solving remains the main focus of our endeavours.

Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 73
109 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at element level cannot be seen. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with the modelling of the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. To this end, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending mode, and (II) the crack initiation in dependence on the diameter and spacing of the transverse reinforcement bars.
The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods of generating random fields, e.g. the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is also given. The aim of this research was to show a method by which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be seen in finite element analysis.

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 157
108 A Design Methodology and Tool to Support Ecodesign Implementation in Induction Hobs

Authors: Anna Costanza Russo, Daniele Landi, Michele Germani

Abstract:

Nowadays, the European Ecodesign Directive has emerged as a new approach to integrate environmental concerns into product design and related processes. Ecodesign aims to minimize environmental impacts throughout the product life cycle without compromising performance and costs. In addition, the recent Ecodesign Directives require products that are increasingly eco-friendly and eco-efficient while preserving high performance. Measuring performance is very important for producers of electric cooking ranges, hobs, ovens, and grills for household use, and low power consumption represents a powerful selling point, also in terms of ecodesign requirements. The Ecodesign Directive provides a clear framework for the sustainable design of products and was extended in 2009 to all energy-related products, i.e. products with an impact on energy consumption during use. The European Regulation establishes ecodesign measures for ovens, hobs, and kitchen hoods for domestic use; the energy efficiency of such products is a significant environmental aspect of the use phase, which is the most impactful phase of the life cycle. It is important that, from a user’s point of view, the product parameters and performance are not affected by the ecodesign requirements, and the benefits of reducing energy consumption in the use phase should offset any possible environmental impact in the production stage. Accurate measurements of cooking appliance performance are essential to help the industry produce more energy-efficient appliances. The development of eco-driven products requires eco-innovation and ecodesign tools to support the sustainability improvement. Ecodesign tools should be practical and focused on specific eco-objectives in order to be widely adopted. The main scope of this paper is the development, implementation, and testing of an innovative tool to improve the sustainable design of induction hobs.
In particular, a prototypical software tool is developed in order to simulate the energy performance of induction hobs. The tool is based on a multiphysics model able to simulate the energy performance and the efficiency of induction hobs starting from the design data. The multiphysics model is composed of an electromagnetic simulation and a thermal simulation. The electromagnetic simulation calculates the eddy currents induced in the pot, which lead to the Joule heating of the material; the thermal simulation estimates the energy consumption during the operational phase. The Joule heating caused by the eddy currents is thus the output of the electromagnetic simulation and the input of the thermal one. The aim of the paper is the development of integrated tools and methodologies for virtual prototyping in the context of ecodesign. Such a tool could be a revolutionary instrument in the field of industrial engineering: it gives consideration to the environmental aspects of product design and focuses on the ecodesign of energy-related products, in order to achieve a reduced environmental impact.
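The electromagnetic-to-thermal coupling described above can be caricatured with a lumped-parameter model in which an assumed eddy-current power heats a pot of water against convective losses; every value below is an illustrative assumption, not an output of the paper's multiphysics simulations:

```python
# Lumped-parameter sketch of the coupling: the electromagnetic step yields
# an induced (eddy-current) Joule power; the thermal step integrates that
# power into a temperature rise with a simple convective loss term.
p_induced = 1800.0          # W, Joule heating from eddy currents (assumed)
efficiency = 0.90           # fraction reaching the pot contents (assumed)
mass, c_p = 1.5, 4186.0     # kg of water, J/(kg K)
h_loss, area = 12.0, 0.05   # convective loss coefficient W/(m2 K), area m2
T_amb = 20.0                # ambient temperature, deg C
T = 20.0                    # initial pot temperature, deg C
dt = 1.0                    # time step, s

history = []
for _ in range(300):        # 5 minutes of heating, explicit Euler integration
    q_in = efficiency * p_induced
    q_out = h_loss * area * (T - T_amb)
    T += (q_in - q_out) * dt / (mass * c_p)
    history.append(T)
print(round(history[-1], 1))  # temperature after 5 minutes, deg C
```

A real tool resolves the eddy-current distribution in the pot base (geometry, frequency, material) with a finite-element electromagnetic solver and feeds a spatially resolved heat source into the thermal solver; the lumped model only shows the direction of the data flow.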

Keywords: ecodesign, energy efficiency, induction hobs, virtual prototyping

Procedia PDF Downloads 250
107 Evaluation in Vitro and in Silico of Pleurotus ostreatus Capacity to Decrease the Amount of Low-Density Polyethylene Microplastics Present in Water Sample from the Middle Basin of the Magdalena River, Colombia

Authors: Loren S. Bernal, Catalina Castillo, Carel E. Carvajal, José F. Ibla

Abstract:

Plastic pollution, specifically microplastic pollution, has become a significant issue in aquatic ecosystems worldwide. The large amount of plastic waste carried by water tributaries has resulted in the accumulation of microplastics in water bodies. The polymer aging process caused by environmental influences, such as photodegradation and chemical degradation of additives, leads to polymer embrittlement and property changes that call for degradation or reduction procedures in rivers. However, such procedures for freshwater bodies, developed over extended periods, are lacking. The aim of this study is to evaluate the potential of the fungus Pleurotus ostreatus to reduce low-density polyethylene (LDPE) microplastics present in freshwater samples collected from the middle basin of the Magdalena River in Colombia. The study evaluates this process both in vitro and in silico, by identifying the growth capacity of Pleurotus ostreatus in the presence of microplastics and by identifying the most likely interactions of Pleurotus ostreatus enzymes and their affinity energies. The study follows an engineering development methodology applied on an experimental basis. The in vitro evaluation protocol focused on the growth capacity of Pleurotus ostreatus on microplastics using enzymatic inducers. For the in silico evaluation, molecular docking simulations were conducted using the Autodock 1.5.7 program to calculate interaction energies, and the molecular dynamics were evaluated using the myPresto Portal and the GROMACS program to calculate radii of gyration and energies. The results of the study showed that Pleurotus ostreatus has the potential to degrade low-density polyethylene microplastics. The in vitro evaluation revealed the adherence of Pleurotus ostreatus to LDPE using scanning electron microscopy. The best results were obtained with enzymatic inducers such as MnSO4, which activate the laccase or manganese peroxidase enzymes involved in the degradation process.
The in silico modelling demonstrated that Pleurotus ostreatus is able to interact with the LDPE microplastics, showing favorable affinity energies in molecular docking, while molecular dynamics showed a minimum energy and a representative radius of gyration for each enzyme and its substrate. The study contributes to the development of bioremediation processes for the removal of microplastics from freshwater sources using the fungus Pleurotus ostreatus. The in silico study provides insights into the affinity energies of the microplastic-degrading enzymes of Pleurotus ostreatus and their interaction with low-density polyethylene. Since the study demonstrated that Pleurotus ostreatus can interact with LDPE microplastics, the fungus is a good agent for the development of bioremediation processes that aid in the recovery of freshwater sources, and the results suggest that bioremediation could be a promising approach to reduce microplastics in freshwater systems.
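The radius-of-gyration quantity reported from the molecular dynamics step has a simple definition that can be sketched directly; the coordinates below are a toy example, not an actual enzyme-LDPE complex from the study:

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Mass-weighted radius of gyration, the compactness measure commonly
    reported from MD trajectories:
    Rg = sqrt( sum_i m_i |r_i - r_cm|^2 / sum_i m_i )."""
    coords = np.asarray(coords, dtype=float)
    m = np.ones(len(coords)) if masses is None else np.asarray(masses, float)
    r_cm = (m[:, None] * coords).sum(axis=0) / m.sum()     # center of mass
    sq = ((coords - r_cm) ** 2).sum(axis=1)                # squared distances
    return float(np.sqrt((m * sq).sum() / m.sum()))

# Toy coordinates in nm: four unit-mass points at the corners of a unit square.
square = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
print(radius_of_gyration(square))  # → 0.7071... (each corner is sqrt(1/2) from the center)
```

In an MD workflow this value is computed per frame over the trajectory; a stable Rg over time is one indication that an enzyme-substrate complex stays compact rather than unfolding.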

Keywords: bioremediation, in silico modelling, microplastics, Pleurotus ostreatus

Procedia PDF Downloads 114