Search results for: maximum efficiency
937 Economic Evaluation of an Advanced Bioethanol Manufacturing Technology Using Maize as a Feedstock in South Africa
Authors: Ayanda Ndokwana, Stanley Fore
Abstract:
Industrial growth and the rapid expansion of the human population in South Africa over the past two decades have increased the use of conventional fossil fuels such as crude oil, coal and natural gas to meet the country’s energy demands. However, the inevitable depletion of fossil fuel reserves, volatile global oil prices and a large carbon footprint are among the crucial reasons the South African Government needs to make a considerable investment in the development of the biofuel industry. In South Africa, this industry is still at the introductory stage, with no large-scale manufacturing plant commissioned yet. Bioethanol is a potential replacement for gasoline, the fossil fuel used in motor vehicles. Using bioethanol as a transport fuel would help the Government save the heavy foreign exchange incurred in importing oil and create many job opportunities in rural farming. In 2007, the South African Government developed the National Biofuels Industrial Strategy in an effort to provide support and attract investment in bioethanol production. However, capital investment in large-scale bioethanol production depends on a sound economic assessment of the available manufacturing technologies. The aim of this study is to evaluate the profitability of an advanced bioethanol manufacturing technology that uses maize as a feedstock in South Africa. Fiber (bran) fractionation gives this technology a number of merits, such as higher energy efficiency, lower capital expenditure and better profitability compared with a conventional dry-mill bioethanol technology. Quantitative techniques will be used to collect and analyze numerical data from suitable organisations in South Africa. The dependence of three profitability indicators, namely the Discounted Payback Period (DPP), Net Present Value (NPV) and Return on Investment (ROI), on plant capacity will be evaluated. Profitability analysis will be done for the following plant capacities: 100 000 ton/year, 150 000 ton/year and 200 000 ton/year. The plant capacity with the shortest Discounted Payback Period, a positive Net Present Value and the highest Return on Investment warrants further consideration for capital investment.
Keywords: bioethanol, economic evaluation, maize, profitability indicators
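The three profitability indicators named in the abstract can be computed from a projected cash-flow series once a discount rate is chosen. The sketch below is illustrative only; the capital cost, annual cash flows and 10% discount rate are hypothetical placeholders, not figures from the study.

```python
# Illustrative profitability indicators for a bioethanol plant (hypothetical numbers).
def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the (negative) capital outlay at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def discounted_payback_period(rate, cash_flows):
    """Years until the cumulative discounted cash flow turns positive (None if never)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None

def roi(cash_flows):
    """Simple ROI: total net gain over the initial investment."""
    investment = -cash_flows[0]
    return (sum(cash_flows[1:]) - investment) / investment

# Hypothetical 150 000 ton/year case: R900m capital outlay, R180m net cash flow for 15 years.
flows = [-900e6] + [180e6] * 15
rate = 0.10  # assumed discount rate
print(f"NPV = {npv(rate, flows)/1e6:,.1f} million")
print(f"DPP = {discounted_payback_period(rate, flows)} years")
print(f"ROI = {roi(flows):.0%}")
```

Running the same functions for each plant capacity lets the three indicators be compared side by side, mirroring the comparison the abstract proposes.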
Procedia PDF Downloads 233
936 Surface Enhanced Infrared Absorption for Detection of Ultra Trace of 3,4-Methylenedioxymethamphetamine (MDMA)
Authors: Sultan Ben Jaber
Abstract:
The optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nanostructured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has received intensive study is surface-enhanced Raman spectroscopy (SERS): since the first observation of the SERS phenomenon, researchers have published a great number of articles on the potential mechanisms behind the effect as well as on materials developed to maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) also shows a noticeable enhancement for molecules under mid-IR excitation on nanostructured metallic substrates. In SEIRA, vibrational modes that give rise to dipole-moment changes perpendicular to the nano-metallic substrate are enhanced up to 200 times relative to the modes of the free molecule. SEIRA spectroscopy is therefore promising for the characterization and identification of molecules adsorbed on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-established technique for measuring IR spectra of adsorbed molecules on metallic surfaces; however, the sensitivity of SEIRA spectroscopy is up to 50 times higher than that of IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al and Ni, with Au and Ag substrates exhibiting the highest enhancement. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) were detected using gold nanoparticle (AuNP) substrates with surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with MDMA samples of different concentrations. Substrate fabrication prior to the SEIRA measurements consisted of mixing the AuNPs and MDMA samples followed by vigorous stirring. The stirring step is particularly crucial, as it allows the molecules to be robustly adsorbed on the AuNPs. Remarkable SEIRA enhancement was thus observed for the MDMA samples even at trace levels, demonstrating the robustness of our approach to preparing SEIRA substrates.
Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, methylenedioxymethamphetamine (MDMA), enhancement factor
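The "enhancement factor" listed in the keywords is conventionally estimated by comparing the band intensity of the analyte adsorbed on the plasmonic substrate with that of the same analyte measured without the substrate, normalized by the number of probed molecules. A minimal sketch of that ratio is given below; the band intensities and molecule counts are hypothetical placeholders, not values from the study.

```python
# Illustrative SEIRA enhancement-factor estimate (hypothetical numbers).
def enhancement_factor(i_seira, n_seira, i_ref, n_ref):
    """EF = (I_SEIRA / N_SEIRA) / (I_ref / N_ref), i.e. per-molecule signal gain."""
    return (i_seira / n_seira) / (i_ref / n_ref)

# Assumed example: the AuNP substrate probes far fewer molecules yet gives a comparable band area.
ef = enhancement_factor(i_seira=0.8, n_seira=1e9, i_ref=1.0, n_ref=2e11)
print(f"Estimated enhancement factor: {ef:.0f}x")
```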
Procedia PDF Downloads 70
935 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions
Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira
Abstract:
Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be some implications, such as labour force, safety problems and acceptance of the new technologies. To overcome these implications, AGILEHAND emphasizes the integration of social sciences and humanities, for example, the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual, that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction. This project received funding from European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreement No101092043 (AGILEHAND).Keywords: change management, technology acceptance model, organizational change, health and safety
Procedia PDF Downloads 45
934 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator
Authors: Ahmed Abdulrahman, Jalal Foroozesh
Abstract:
CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: the contribution of CO₂ to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of direct CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow, gradual cross-phase diffusion of CO₂ from carbonated water (CW) to the oil phase and CO₂ dispersion (within-phase diffusion and mechanical mixing), which affect the physical properties of the oil and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. A material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to oil within the grid cell. Longitudinal and transverse dispersion coefficients were also added to account for the spatial distribution of CO₂ inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a large impact on the spatial spreading of CO₂ (and hence on accurate prediction of CO₂ production), and the resulting change in oil physical properties tends to increase the recovery factor. A sensitivity analysis was carried out to investigate the effects of formation heterogeneity (porosity, permeability) and injection rate; it was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients and that a low injection rate should be implemented during CWI.
Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation
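In dispersion modeling of this kind, the longitudinal and transverse dispersion coefficients are commonly built from a molecular-diffusion term plus a velocity-dependent mechanical-mixing term scaled by dispersivity. The sketch below illustrates that standard construction only; the dispersivity values, interstitial velocity and diffusion coefficient are hypothetical and are not taken from the paper's simulator.

```python
# Illustrative scale-dependent dispersion coefficients (hypothetical inputs).
def dispersion_coefficients(d_mol, velocity, alpha_l, alpha_t):
    """Classic advection-dispersion form: D = D_molecular + dispersivity * |v|."""
    d_long = d_mol + alpha_l * abs(velocity)   # longitudinal
    d_trans = d_mol + alpha_t * abs(velocity)  # transverse
    return d_long, d_trans

d_mol = 2.0e-9        # m^2/s, assumed CO2-oil molecular diffusion coefficient
velocity = 1.0e-5     # m/s, assumed interstitial velocity
alpha_l = 0.1         # m, assumed longitudinal dispersivity (scale dependent)
alpha_t = 0.01        # m, assumed transverse dispersivity
d_l, d_t = dispersion_coefficients(d_mol, velocity, alpha_l, alpha_t)
print(f"D_L = {d_l:.2e} m^2/s, D_T = {d_t:.2e} m^2/s")
```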
Procedia PDF Downloads 176
933 Internet-Of-Things and Ergonomics, Increasing Productivity and Reducing Waste: A Case Study
Authors: V. Jaime Contreras, S. Iliana Nunez, S. Mario Sanchez
Abstract:
Inside a manufacturing facility, there are innumerable automatic and manual operations, all of which are relevant to the production process, although some add more value to the product than others. Manual operations tend to add value because they are typically found in the final assembly area or the final operations of the process, where a mistake or accident can increase the cost of waste exponentially. One approach to reducing or mitigating these costly mistakes is to rely on automation and eliminate the operator from the production line, which requires a hefty investment and the development of specialized machinery. In our approach, the operator is the center of the solution, supported by sufficient and adequate instrumentation, real-time reporting and ergonomics. Efficiency and reduced cycle time can be achieved through the integration of Internet-of-Things (IoT)-ready technologies into assembly operations to enhance the ergonomics of the workstations. Augmented reality visual aids, RFID-triggered personalized workstation dimensions, and real-time data transfer and reporting can help achieve these goals. In this case study, a standard work cell is used for real-life data acquisition, and simulation software is used to extend the data points beyond the test cycle. Three comparison scenarios are run in the work cell, each introducing one dimension of the ergonomics to measure its impact independently. Furthermore, the separate tests determine the limitations of the technology and provide a reference for operating costs and the investment required. With the ability to monitor costs, productivity, cycle time and scrap/waste in real time, the return on investment (ROI) can be determined at the different levels of integration. This case study helps to show that ergonomics in assembly lines can have a significant impact when IoT technologies are introduced: ergonomics can effectively reduce waste and increase productivity with minimal investment compared with setting up a custom machine.
Keywords: augmented reality visual aids, ergonomics, real-time data acquisition and reporting, RFID triggered workstation dimensions
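Because the work cell streams cost, cycle-time and scrap data in real time, the ROI of each integration level can be recomputed continuously from those metrics. The sketch below shows one simple way to do that; the baseline figures, savings model and investment amounts are invented for illustration and do not come from the case study.

```python
# Illustrative ROI comparison across IoT/ergonomics integration levels (hypothetical data).
def annual_savings(baseline, improved, unit_margin, scrap_cost):
    """Savings from extra throughput plus avoided scrap, per year."""
    extra_units = improved["units_per_year"] - baseline["units_per_year"]
    scrap_reduction = baseline["scrap_units"] - improved["scrap_units"]
    return extra_units * unit_margin + scrap_reduction * scrap_cost

baseline = {"units_per_year": 120_000, "scrap_units": 2_400}
levels = {  # assumed integration levels and their monitored outcomes
    "AR visual aids only":        ({"units_per_year": 126_000, "scrap_units": 1_900}, 40_000),
    "AR + RFID workstation":      ({"units_per_year": 131_000, "scrap_units": 1_500}, 75_000),
    "AR + RFID + live reporting": ({"units_per_year": 134_000, "scrap_units": 1_200}, 110_000),
}
for name, (improved, investment) in levels.items():
    savings = annual_savings(baseline, improved, unit_margin=8.0, scrap_cost=12.0)
    print(f"{name}: ROI = {(savings - investment) / investment:.0%} in year one")
```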
Procedia PDF Downloads 214
932 Developing a Systemic Monoclonal Antibody Therapy for the Treatment of Large Burn Injuries
Authors: Alireza Hassanshahi, Xanthe Strudwick, Zlatko Kopecki, Allison J Cowin
Abstract:
Studies have shown that Flightless (Flii) is elevated in human wounds, including burns, and that reducing the level of Flii is a promising approach for improving wound repair and reducing scar formation. The most effective approach to date has been to neutralise Flii activity using localized, intradermal application of function-blocking monoclonal antibodies. However, large surface area burns are difficult to treat by intradermal injection of therapeutics, so the aim of this study was to investigate whether a systemic injection of a monoclonal antibody against Flii could improve healing in mice following burn injury. Flii neutralizing antibodies (FnAbs) were labelled with Alexa Fluor 680 for biodistribution studies, and the healing effects of systemically administered FnAbs were assessed in mice with burn injuries. A partial-thickness, 7% (70 mm²) total body surface area scald burn injury was created on the dorsal surface of mice (n=10/group), and 100 µL of Alexa Fluor 680-labelled FnAbs was injected into the intraperitoneal cavity (IP) at the time of injury. The burns were imaged on days 0, 1, 2, 3, 4, and 7 using the IVIS Lumina S5 Imaging System, and healing was assessed macroscopically, histologically, and using immunohistochemistry. Fluorescent radiance efficiency measurements showed that the IP-injected Alexa Fluor 680-FnAbs localized at the site of burn injury from day 1 and remained there for the whole 7-day study. The burns treated with FnAbs showed a reduction in macroscopic wound area and an increased rate of epithelialization compared to controls. Immunohistochemistry for NIMP-R14 showed a reduction in the inflammatory infiltrate, while CD31/VEGF staining showed improved angiogenesis post-systemic FnAb treatment. These results suggest that systemically administered FnAbs are active within the burn site and can improve healing outcomes. The clinical application of systemically injected Flii monoclonal antibodies could therefore be a potential approach for promoting the healing of large surface area burns immediately after injury.
Keywords: biodistribution, burn, flightless, systemic, FnAbs
Procedia PDF Downloads 173
931 Electrifying Textile Wastewater Sludge through Up-flow Anaerobic Sludge Blanket Reactor for Sustainable Waste Management
Authors: Tewodros Birhan, Tamrat Tesfaye
Abstract:
Energy supply and waste management are two of humanity's greatest challenges. The world's energy supply primarily relies on fossil fuels, which produce excessive carbon dioxide emissions when burned. When released into the atmosphere in high concentrations, these emissions contribute to global warming. Generating textile wastewater sludge from the Bahir Dar Textile Industry poses significant environmental challenges. This sludge, a byproduct of extensive dyeing and finishing processes, contains a variety of harmful chemicals and heavy metals that can contaminate soil and water resources. This research work explores sustainable waste management strategies, focusing on biogas production from textile wastewater sludge using up-flow anaerobic sludge blanket reactor technology. The objective was to harness biogas, primarily methane, as a renewable energy source while mitigating the environmental impact of textile wastewater disposal. Employing a Central Composite Design approach, experiments were meticulously designed to optimize process parameters. Two key factors, Carbon-to-Nitrogen ratio, and pH, were varied at different levels (20:1 and 25:1 for C: N ratio; 6.8 and 7.6 for pH) to evaluate their influence on methane yield. A 0.4m3 up-flow anaerobic sludge blanket reactor was constructed to facilitate the anaerobic digestion process. Over 26 days, the reactor underwent rigorous testing and monitoring to ascertain its efficiency in biogas production. Meticulous experimentation and data analysis found that the optimal conditions for maximizing methane yield were achieved. Notably, a methane yield of 56.4% was attained, which signifies the effectiveness of the up-flow anaerobic sludge blanket reactor in converting textile wastewater sludge into a valuable energy resource. The findings of this study hold significant implications for both environmental conservation and energy sustainability. Furthermore, the utilization of up-flow anaerobic sludge blanket reactor technology underscores its potential as a viable solution for biogas production from textile wastewater sludge, further promoting the transition towards a circular economy paradigm.Keywords: anaerobic digestion, biogas energy, circular economy, textile sludge, waste-to-energy
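The abstract's two-factor Central Composite Design (C:N ratio and pH) can be laid out programmatically before running the digester trials. The sketch below generates a generic face-centred CCD for those two factors; the coded-to-actual mapping uses the factor levels quoted in the abstract, while the centre-point count and the face-centred (rather than rotatable) geometry are assumptions for illustration.

```python
# Illustrative face-centred central composite design for the two digester factors.
from itertools import product

def face_centred_ccd(centre_points=3):
    """Coded design points: factorial corners, axial (face) points, and centre replicates."""
    factorial = list(product([-1, 1], repeat=2))
    axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    centre = [(0, 0)] * centre_points
    return factorial + axial + centre

def decode(coded, low, high):
    """Map a coded level in [-1, 1] to the actual factor value."""
    return low + (coded + 1) * (high - low) / 2

for cn_coded, ph_coded in face_centred_ccd():
    cn = decode(cn_coded, 20, 25)   # C:N ratio levels from the abstract
    ph = decode(ph_coded, 6.8, 7.6) # pH levels from the abstract
    print(f"run: C:N = {cn:4.1f}:1, pH = {ph:3.1f}")
```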
Procedia PDF Downloads 4
930 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet
Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez
Abstract:
Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but the reported data are inconsistent, mainly because of the structural instability and low solubility of quercetin. Nanoencapsulation is a promising strategy to overcome these intrinsic limitations. This work therefore aims to develop a quercetin nanoformulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a mouse model of induced T2DM. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and their structural and colloidal properties were determined by FTIR and DLS. For the in vivo studies, CD1 male mice (n=25) were divided into five groups (n=5 each). T2DM was induced using a high-fat and high-sugar diet for 32 weeks together with streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet and water ad libitum; Group 2, diabetic mice treated with saline solution; Group 3, diabetic mice treated with glibenclamide; Group 4, diabetic mice treated with empty nanoparticles; and Group 5, diabetic mice treated with quercetin nanoparticles. The quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04 and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency of the nanoparticles was 58 ± 3.33%. No significant differences (p > 0.05) were observed in the biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but the quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed degenerated and fatty liver tissue in the T2DM groups; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy group. These results demonstrate that quercetin nanoformulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.
Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles
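Encapsulation efficiency figures like the 58 ± 3.33% reported above are usually obtained by quantifying the unencapsulated (free) quercetin in the supernatant and relating it to the total amount added. The sketch below shows that standard calculation; the concentrations and replicate values are hypothetical, not the study's raw data.

```python
# Illustrative encapsulation-efficiency calculation (hypothetical replicate data).
from statistics import mean, stdev

def encapsulation_efficiency(total_mg, free_mg):
    """EE% = (total drug - free drug in supernatant) / total drug * 100."""
    return (total_mg - free_mg) / total_mg * 100

total_quercetin = 10.0                    # mg added per batch (assumed)
free_in_supernatant = [4.3, 4.0, 4.4]     # mg measured in three replicates (assumed)
ee = [encapsulation_efficiency(total_quercetin, f) for f in free_in_supernatant]
print(f"EE = {mean(ee):.1f} ± {stdev(ee):.2f} %")
```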
Procedia PDF Downloads 35
929 Climate Change Impact on Whitefly (Bemisia tabaci) Population Infesting Tomato (Lycopersicon esculentus) in Sub-Himalayan India and Their Sustainable Management Using Biopesticides
Authors: Sunil Kumar Ghosh
Abstract:
Tomato (Lycopersicon esculentus L.) is an annual vegetable crop grown in the sub-Himalayan region of north-east India throughout the year, except during the rainy season, under normal field cultivation. The crop is susceptible to various insect pests, of which the whitefly (Bemisia tabaci Genn.) causes heavy damage; a study of its occurrence and sustainable management is therefore needed for successful cultivation. The pest was active throughout the growing period. The minimum population was observed from the 38th to the 41st standard week, that is, from the 3rd week of September to the 2nd week of October. The maximum population level was maintained from the 11th to the 18th standard week, that is, from the 2nd to the 3rd week of March, when the peak population (0.47/leaf) was recorded. Weekly whitefly counts showed a non-significant negative correlation (p = 0.05) with temperature and weekly total rainfall, whereas the correlation with relative humidity was significantly negative. Eight treatments were evaluated for management of the whitefly: the botanical insecticide azadirachtin; botanical extracts of Spilanthes paniculata flower, Polygonum hydropiper L. flower, tobacco leaf and garlic; and a mixed formulation of neem and floral extract of Spilanthes; these were compared with acetamiprid. The insecticide acetamiprid was found most lethal against whitefly, providing 76.59% suppression, closely followed by the extracts of neem + Spilanthes, providing 62.39% suppression. Spectrophotometric scanning of the crude methanolic extract of Polygonum flower showed strong absorbance at wavelengths between 645 and 675 nm; judging from the absorbance peaks, the flower extract contains some important chemicals such as spirilloxanthin, quercetin diglycoside, quercetin 3-O-rutinoside, procyanidin B1 and isorhamnetin 3-O-rutinoside, which are responsible for pest control. Spectrophotometric scanning of the crude methanolic extract of Spilanthes flower likewise showed strong absorbance between 645 and 675 nm; judging from the peaks, this extract contains important chemicals of which polysulphide compounds are the main ones responsible for pest control. Neem and Spilanthes individually did not produce good results, but when used as a mixture they recorded better results. The highest yield (30.15 t/ha) was recorded from acetamiprid-treated plots, followed by neem + Spilanthes (27.55 t/ha). Azadirachtin and plant extracts are biopesticides with little or no hazardous effect on human health and the environment; they can therefore be incorporated into IPM programmes and organic farming in vegetable cultivation.
Keywords: biopesticides, organic farming, seasonal fluctuation, vegetable IPM
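Two of the quantities reported above, the correlation of weekly counts with weather variables and the percentage suppression relative to an untreated control, are straightforward to reproduce from field data. The sketch below shows both calculations on invented weekly counts; none of the numbers are the study's actual observations, and the suppression formula is the standard Abbott-style comparison against control plots.

```python
# Illustrative analysis of weekly whitefly counts (all numbers invented).
from statistics import correlation  # available in Python 3.10+

weeks_count = [0.05, 0.10, 0.21, 0.35, 0.47, 0.40, 0.22, 0.09]   # whiteflies per leaf
temperature = [18.2, 19.5, 21.0, 23.4, 25.1, 27.8, 30.2, 31.5]   # deg C
humidity = [88, 84, 79, 72, 66, 70, 78, 85]                      # % RH

print(f"r(count, temperature) = {correlation(weeks_count, temperature):+.2f}")
print(f"r(count, humidity)    = {correlation(weeks_count, humidity):+.2f}")

def percent_suppression(control_count, treated_count):
    """Reduction of the pest population relative to the untreated control."""
    return (control_count - treated_count) / control_count * 100

print(f"acetamiprid suppression: {percent_suppression(0.47, 0.11):.1f} %")
```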
Procedia PDF Downloads 309
928 Influence of Genotype, Explant, and Hormone Treatment on Agrobacterium-Transformation Success in Salix Callus Culture
Authors: Lukas J. Evans, Danilo D. Fernando
Abstract:
Shrub willows (Salix spp.) have many characteristics that make them suitable for a variety of applications such as riparian-zone buffers, environmental contaminant sequestration, living snow fences, and biofuel production. In some cases, these functions are limited by the physical or financial obstacles associated with the number of individuals needed to reasonably satisfy the purpose. One way to increase the efficiency of willows is to bioengineer them with genetic improvements suitable for the desired use. To accomplish this goal, an optimized in vitro transformation protocol via Agrobacterium tumefaciens is necessary to reliably express genes of interest. Therefore, the aim of this study is to observe the influence of tissue culture with different willow cultivars, hormones, and explants on the percentage of calli expressing the reporter gene green fluorescent protein (GFP), in order to find ideal transformation conditions. Calli were produced from three explant types (lamina, petiole, and internode) of 1-month-old open-pollinated seedlings of three Salix miyabeana cultivars (‘SX61’, ‘WT1’, and ‘WT2’). Explants were cultured for 1 month on MS medium with different concentrations of 6-benzylaminopurine (BAP) and 1-naphthaleneacetic acid (NAA) (no hormones; 1 mg L⁻¹ BAP only; 3 mg L⁻¹ NAA only; 1 mg L⁻¹ BAP + 3 mg L⁻¹ NAA; and 3 mg L⁻¹ BAP + 1 mg L⁻¹ NAA) to produce calli. Samples were then treated with Agrobacterium tumefaciens at an OD600 of 0.6-0.8 for 30 minutes to insert the GFP transgene, co-cultivated for 72 hours, and selected for 1 week on the same medium they had been cultured on, supplemented with 7.5 mg L⁻¹ hygromycin, before GFP visualization under a UV dissecting scope. The percentage of GFP-expressing calli as well as the average number of fluorescing GFP units per callus were recorded, and the results were evaluated through an ANOVA test (α = 0.05). The WT1 internode-derived calli on media with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP or with 1 mg L⁻¹ BAP alone produced a significantly higher percentage of GFP-expressing calli than every other group (19.1% and 19.4%, respectively). Additionally, the WT1 internode group cultured with 3 mg L⁻¹ NAA + 1 mg L⁻¹ BAP produced an average of 2.89 GFP units per callus, while the group cultivated with 1 mg L⁻¹ BAP alone produced an average of 0.84 GFP units per callus. In conclusion, genotype, explant choice, and hormones all play a significant role in increasing successful transformation in willows. Future studies on whole-callus GFP expression and subsequent plantlet regeneration are necessary for a complete willow transformation protocol.
Keywords: agrobacterium, callus, Salix, tissue culture
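The comparison reported above (percentage of GFP-expressing calli across cultivar, explant and hormone groups at α = 0.05) is a standard analysis of variance. A minimal one-way sketch on invented per-group percentages is shown below; the group labels mirror the abstract, but the data and the use of SciPy are assumptions for illustration.

```python
# Illustrative one-way ANOVA on GFP-expressing callus percentages (invented data).
from scipy import stats

# Percentage of GFP-expressing calli per replicate plate, by treatment group (assumed values).
groups = {
    "WT1 internode, 3 NAA + 1 BAP": [18.0, 19.5, 19.8],
    "WT1 internode, 1 BAP only":    [18.8, 19.7, 19.6],
    "SX61 lamina, no hormones":     [2.1, 3.4, 2.8],
    "WT2 petiole, 3 NAA only":      [6.5, 5.9, 7.2],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At alpha = 0.05, at least one treatment group differs significantly.")
```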
Procedia PDF Downloads 123
927 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow
Authors: Sudipto Sarkar, Anamika Paul
Abstract:
Pseudoplastic fluids (n < 1, where n is the power-law index) are of great importance in the food, pharmaceutical and chemical process industries and therefore deserve considerable attention. Unfortunately, owing to their complex flow behaviour, few adequate research works can be found even for the laminar flow regime. A practical problem is solved in the present work by numerical simulation, in which we attempt to control the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate lies on the centreline of the cylinder at a varying distance from it, so that the critical gap ratio can be calculated: if the plate is placed inside this critical gap, vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid). The flow behaviour has been studied for three gap ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for fluids with three flow behaviour indices (n = 1, 0.8 and 0.5). The flow domain is constructed using Gambit 2.2.30, which is also used to generate the mesh and impose the boundary conditions. For G/a = 2, the domain size is 37.5a × 16a with 316 × 208 grid points in the streamwise and flow-normal directions, respectively, after a thorough grid-independence study. Fine, equally spaced grids are used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate; away from the geometry the meshes are unequal in size and stretched out. For the other gap ratios, proportionate domain sizes and total grid points are used with a similar mesh distribution. A velocity inlet (u = U∞), a pressure outlet (Neumann condition) and symmetry (free-slip) conditions at the upper and lower domain boundaries are used for the simulation, and a wall boundary condition (u = v = 0) is applied on both the cylinder and the splitter plate surfaces. The discretized forms of the fully conservative 2-D unsteady Navier-Stokes equations are then solved by Ansys Fluent 14.5 using the SIMPLE algorithm, a default finite-volume solver included in Fluent. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent's usefulness in academic research. A thorough analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that as the value of n is reduced, the stretching of the shear layers also reduces and the layers try to roll up before the plate. For flow with high pseudoplasticity (n = 0.5), the nature of the vortex shedding changes and the value of the critical gap ratio reduces. These are the notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio
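For a power-law (pseudoplastic) fluid the apparent viscosity depends on the shear rate, μ = K·γ̇ⁿ⁻¹, which is why the abstract defines Re with the maximum kinematic viscosity. The sketch below evaluates that apparent viscosity and the resulting Reynolds number Re = U∞a/ν; the consistency index K, density and geometry are placeholder values, not the simulation settings.

```python
# Illustrative power-law viscosity and Reynolds number (placeholder fluid properties).
def apparent_viscosity(k, n, shear_rate):
    """Power-law model: mu = K * (shear rate)^(n-1); shear-thinning when n < 1."""
    return k * shear_rate ** (n - 1)

def reynolds_number(u_inf, a, nu):
    """Re = U_inf * a / nu, as defined in the abstract."""
    return u_inf * a / nu

k, rho = 0.05, 1000.0      # Pa.s^n consistency index and kg/m^3 density (assumed)
u_inf, a = 0.01, 0.02      # m/s free-stream velocity and m cylinder side (assumed)
for n in (1.0, 0.8, 0.5):  # flow behaviour indices studied in the paper
    mu = apparent_viscosity(k, n, shear_rate=u_inf / a)  # representative shear rate
    nu = mu / rho
    print(f"n = {n}: apparent nu = {nu:.2e} m^2/s, Re = {reynolds_number(u_inf, a, nu):.1f}")
```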
Procedia PDF Downloads 111
926 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) contains coarse granularity multi-core CPU (central processing unit) and mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupled with these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce the resource consumption and lower the overall cost of the system. However, in present common scenarios, the applications always utilize only one type of accelerator because the development approach supporting the collaboration of the heterogeneous processors faces challenges. Therefore, a systematic approach takes advantage of write-once-run-anywhere portability, high execution performance of the modules mapped to various architectures and facilitates the exploration of design space. In this paper, A servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors, which supports task partition, communication and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and computational units, including two hierarchical processing units mapping and adjustment of data-level parallelism. An embedded system of a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching, etc., are analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system with less than 35% resources achieves similar performance to the pure FPGA and approximate energy efficiency.Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 118
925 A Case Study of Determining the Times of Overhauls and the Number of Spare Parts for Repairable Items in Rolling Stocks with Simulation
Authors: Ji Young Lee, Jong Woon Kim
Abstract:
It is essential to secure high availability of railway vehicles in order to realize high quality and efficiency of railway service. Once availability decreases, the planned railway service cannot be provided unless more cars are reserved, additional cars are purchased, or the frequency of the railway service is reduced. Such a situation would be a big loss to operators in terms of both the quality and the cost of the railway service, so various efforts are made to keep the availability of railway vehicles high. To secure high availability, the idle time of the vehicles needs to be reduced, and the following methods are applied. First, through modularized design, the exchange time for line-replaceable units is reduced so that railway vehicles can be put back into service quickly. Second, to reduce the time spent on periodic preventive maintenance, short-period preventive maintenance is carried out in a test-oriented way to minimize maintenance time, and reliability is secured through overhauls of each main component. With such design changes, modularized components are exchanged first when a vehicle fails or is due for overhaul, so the vehicle can be returned to service quickly, and the exchanged components are then repaired or overhauled. Spare components are therefore required for future failures and overhauls, and because the components are modularized and expensive, it is considerably important to determine reasonable quantities of spares. In particular, when a number of railway vehicles are put into service simultaneously, their overhaul times come almost at the same time; for some vehicles, components then need to be exchanged and overhauled before the appointed overhaul period so that these components can be secured as spare parts for the next vehicle's component overhaul. For this reason, component overhaul times and spare parts quantities should be decided together. This study deals with the overhaul times for repairable components of railway vehicles and the calculation of spare parts quantities in consideration of future failures and overhauls. However, because railway vehicles are operated according to the service schedule and maintenance work cannot proceed until the service has closed, it is quite difficult to resolve this situation mathematically. A simulation software system is therefore used in this study to analyze the overhaul times for repairable components of railway vehicles and the required spare parts for the railway system.
Keywords: overhaul time, rolling stocks, simulation, spare parts
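The coupling between overhaul timing and spare-part quantity described above lends itself to a simple Monte Carlo estimate: simulate when each fleet component reaches its overhaul, shift some overhauls earlier, and count how many units are in the repair loop at once. The sketch below is a deliberately simplified illustration; the fleet size, overhaul interval, repair turnaround and early-overhaul rule are all invented parameters, not the study's model.

```python
# Illustrative Monte Carlo estimate of peak spare-part demand (all parameters invented).
import random

def simulate_peak_in_repair(n_vehicles=30, overhaul_interval=1460, repair_days=60,
                            early_shift=30, horizon=4000, runs=200):
    """Return the average peak number of components simultaneously out for overhaul."""
    peaks = []
    for _ in range(runs):
        # Each vehicle's component is overhauled periodically; some are pulled early
        # (by up to `early_shift` days) so overhauls do not all coincide.
        due_dates = []
        for _vehicle in range(n_vehicles):
            start = random.randint(0, 90)  # commissioning spread in days
            t = start + overhaul_interval
            while t < horizon:
                due_dates.append(t - random.randint(0, early_shift))
                t += overhaul_interval
        in_repair = [0] * horizon
        for d in due_dates:
            for day in range(d, min(d + repair_days, horizon)):
                in_repair[day] += 1
        peaks.append(max(in_repair))
    return sum(peaks) / len(peaks)

print(f"Average peak components in repair (~spares needed): {simulate_peak_in_repair():.1f}")
```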
Procedia PDF Downloads 337
924 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, as well as scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., more test items) is, the higher reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processes. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and its impact on assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners’ L2 reading comprehension. The present study, therefore, adopted the generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework used for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language student who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners’ L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.Keywords: second language reading assessment, validity and reliability, Generalizability theory, Academic reading, Question format
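In G theory, the reliability-like G (generalizability) coefficient for a persons-by-items design is built from estimated variance components, and it shows directly how adding items raises score reliability, which is the question the abstract investigates. The sketch below computes that coefficient for a range of item counts; the variance components are invented for illustration, not estimates from the mGENOVA analysis.

```python
# Illustrative G coefficient for a persons-by-items (p x i) design (invented variances).
def g_coefficient(var_person, var_residual, n_items):
    """Relative G coefficient: person variance over person variance plus relative error.
    For a p x i design the relative error is the residual component divided by n_items."""
    return var_person / (var_person + var_residual / n_items)

var_person = 0.30     # assumed universe-score (person) variance component
var_residual = 0.70   # assumed residual (p x i interaction + error) variance component

for n_items in (10, 20, 30, 40):
    print(f"{n_items:2d} items -> G = {g_coefficient(var_person, var_residual, n_items):.2f}")
```

With these assumed components the coefficient climbs from about 0.81 at 10 items to about 0.94 at 40, illustrating why longer tests tend to yield higher score reliability.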
Procedia PDF Downloads 88
923 Money Laundering and Terror Financing in the Islamic Banking Sector in Bangladesh
Authors: Md. Abdul Kader
Abstract:
Several reports released by Global Financial Integrity (GFI) in recent times have identified Bangladesh as being among the worst affected countries to the scourge of money laundering (ML) and terrorist financing (TF). The money laundering (ML) and terrorist financing (TF) risks associated with conventional finance are generally well identified and understood by the relevant national authorities. There is, however, no common understanding of ML/TF risks associated with Islamic Banking. This paper attempts to examine the issues of money laundering (ML) and terrorist financing (TF) in Islamic Banks of Bangladesh. This study also investigates the risk factors associated with Islamic Banking system of Bangladesh that are favorable for ML and TF and which prevent the government to control such issues in the Islamic Banks of Bangladesh. Qualitative research methods were employed by studying various reports from journals, newspapers, bank reports and periodicals. In addition, five ex-bankers who were in the policy making bodies of three Islamic Banks were also interviewed. Findings suggest that government policies regarding Islamic Banking system in Bangladesh are not well defined and clear. Shariah law, that is the guiding principle of Islamic Banking, is not well recognized by the government policy makers, and thus they left the responsibility to the governing bodies of the banks. Other challenges that were found in the study are: the complexity of some Islamic banking products, the different forms of relationship between the banks and their clients, the inadequate ability and skill in the supervision of Islamic finance, particularly in jurisdictions, to evaluate their activities. All these risk factors paved the ground for ML and TF in the Islamic Banks of Bangladesh. However, due to unconventional nature of Banking and lack of investigative reporting on Islamic Banking, this study could not cover the whole picture of the ML/TF of Islamic Banks of Bangladesh. However, both qualitative documents and interviewees confirmed that Islamic Banking in Bangladesh could be branded as risky when it comes to money laundering and terror financing. This study recommends that the central bank authorities who supervise Islamic finance and the government policy makers should obtain a greater understanding of the specific ML/TF risks that may arise in Islamic Banks and develop a proper response. The study findings are expected to considerably impact Islamic banking management and policymakers to develop strong and appropriate policy to enhance transparency, accountability, and efficiency in banking sector. The regulatory bodies can consider the findings to disseminate anti money laundering and terror financing related rules and regulations.Keywords: money laundering, terror financing, islamic banking, bangladesh
Procedia PDF Downloads 95
922 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region
Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Like other developing countries such as China, Malaysia and Korea, India is developing its infrastructure — roads, railways, airports and waterborne facilities — at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion the Navi Mumbai suburb is being developed on the east bank of Thane Creek near Mumbai. Because the limited space at the existing Mumbai airports (domestic and international) cannot cater for the future demand of air traffic, the Government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005, and with the nearby townships lying in a low-lying area, it is essential to study this complex confluence area hydrodynamically under both tidal and extreme events (predicted discharge hydrographs), in order to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). Model studies were conducted using an unstructured mesh to simulate the Panvel estuarine area (93 km²); the model was calibrated and validated against hydraulic field measurements, and the maximum water levels around the airport were determined for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and the peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers — the Gadhi, Kalundri, Taloja, Kasadi and Ulwe — meeting at the proposed airport area. The studies revealed that: (a) the Ulwe River flowing beneath the proposed airport needs to be diverted; a 120 m wide Ulwe diversion channel, with a wider base width of 200 m at the SH-54 bridge on the Ulwe River, together with the removal of the existing bund in Moha Creek, is essential to keep the SGE of the airport to a minimum; (b) a clear waterway of 80 m at the SH-54 bridge (Ulwe River) and of 120 m at the Amra Marg bridge near Moha Creek is also essential for the Ulwe diversion; and (c) river bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe diversion channel, are essential to avoid inundation of low-lying areas. The predicted maximum water levels around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, so the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is thus a promising tool for simulating complex extreme hydrodynamic events and provides a reliable solution for evolving the optimal SGE of the airport.
Keywords: airport, hydrodynamics, safe grade elevation, tides
Procedia PDF Downloads 261
921 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City
Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja
Abstract:
This study was undertaken to measure the overall noise levels in different locations/zones of Mumbai, India, and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers. The Hearing Test developed by the American Academy of Otolaryngology, translated from English into Hindi and validated, was used as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2 and 3; a score of 6 or above indicates some or definite difficulty in hearing during daily activities, while a lower score indicates lesser difficulty or normal hearing. Subjects who scored 6 or above, or who reported tinnitus, underwent hearing evaluation by pure-tone audiometry. The environmental noise levels were measured from morning to evening at the roadside in different locations/hawking zones in Mumbai city using a digital sound level meter (SLM, Agronic 8928 B & K type) in dB(A). A maximum noise level of 100.0 dB(A) was recorded during the evening hours from Chattrapati Shivaji Terminal to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commencement of flyovers with two-tier traffic, skywalks, the increasing volume of vehicular traffic on the roads, high-rise buildings and other commercial and urbanization activities in Mumbai city have most probably increased the overall environmental noise levels, while trees that acted as noise absorbers have been cut owing to rapid construction. The study involved 100 participants aged 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or obtained a score of 6 or above underwent pure-tone audiometry, and the prevalence of hearing loss in hawkers and shopkeepers was found to be 19% (10% hawkers and 9% shopkeepers). Among the participants, 29 of the 64 hawkers (42.6%) and 17 of the 36 shopkeepers (47.2%) underwent PTA, and there was no significant difference between the two groups in the percentage of noise-induced hearing loss. The results also reveal that 19 of the 46 participants (41.30%) who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure-tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal; 7 hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that the 4/6 kHz notch is a strong marker for road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development combined with social and urban forestry, can help in this regard.
Keywords: NIHL, noise, sound level meter, tinnitus
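Overall (equivalent) noise levels like the 79.0 dB(A) quoted for the CST-Colaba stretch cannot be obtained by arithmetically averaging readings in decibels; sound levels are combined on an energy basis. The sketch below shows that standard Leq calculation on invented spot readings; the values are not the survey's measurements.

```python
# Illustrative equivalent continuous level (Leq) from spot SPL readings (invented data).
import math

def leq(spl_readings_dba):
    """Energy-average sound levels: Leq = 10*log10(mean(10^(L/10)))."""
    energies = [10 ** (level / 10) for level in spl_readings_dba]
    return 10 * math.log10(sum(energies) / len(energies))

morning_to_evening = [72.6, 75.0, 78.2, 81.5, 84.0, 100.0, 76.3]  # dB(A), assumed spot readings
print(f"Overall Leq = {leq(morning_to_evening):.1f} dB(A)")
```

Note how a single loud reading dominates the energy average, which is why peak and overall levels are reported separately in the survey.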
Procedia PDF Downloads 202
920 Stretchable and Flexible Thermoelectric Polymer Composites for Self-Powered Volatile Organic Compound Vapors Detection
Authors: Petr Slobodian, Pavel Riha, Jiri Matyas, Robert Olejnik, Nuri Karakurt
Abstract:
Thermoelectric devices generate an electrical current when there is a temperature gradient between the hot and cold junctions of two dissimilar conductive materials, typically n-type and p-type semiconductors. Consequently, polymeric semiconductors composed of a polymeric matrix filled with different forms of carbon nanotubes with a proper structural hierarchy can also show thermoelectric properties and convert a temperature difference into electricity. In spite of the lower thermoelectric efficiency of polymeric thermoelectrics in terms of the figure of merit, properties such as stretchability, flexibility, light weight, low thermal conductivity, easy processing and low manufacturing cost are advantages in many technological and ecological applications. Highly elastic composites based on a polyethylene-octene copolymer filled with multi-walled carbon nanotubes (MWCNTs) were prepared by sonication of a nanotube dispersion in a copolymer solution followed by precipitation through pouring into a non-solvent. The electronic properties of the MWCNTs were tuned by different treatments such as chemical oxidation, decoration with Ag clusters, or the addition of low-molecular-weight dopants. For example, the oxygenated functional groups attached to the MWCNT surface by HNO₃ oxidation increase the density of p-type charge carriers, which can be increased further by doping with triphenylphosphine molecules. To partially convert p-type MWCNTs into less p-type ones, Ag nanoparticles were deposited on the MWCNT surface, which was then doped with 7,7,8,8-tetracyanoquinodimethane. The two types of MWCNTs with the largest difference in generated thermoelectric power were combined to manufacture a polymer-based thermoelectric module that generates a thermoelectric voltage when a temperature difference is applied between its hot and cold ends. Moreover, it was found that the voltage generated by the module at a constant temperature gradient changed significantly when the module was exposed to vapors of different volatile organic compounds, so that it acts as a self-powered thermoelectric sensor for chemical vapor detection.
Keywords: carbon nanotubes, polymer composites, thermoelectric materials, self-powered gas sensor
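Two quantities sit behind this abstract: the open-circuit voltage of a module, V ≈ N·(S_p − S_n)·ΔT, and the figure of merit ZT = S²σT/κ that the authors acknowledge is low for polymer composites. The sketch below evaluates both; the Seebeck coefficients, conductivities and junction count are hypothetical values, not measurements from the paper.

```python
# Illustrative thermoelectric module voltage and figure of merit (hypothetical properties).
def module_voltage(n_junctions, s_p, s_n, delta_t):
    """Open-circuit voltage of a module with N p/n junctions in series."""
    return n_junctions * (s_p - s_n) * delta_t

def figure_of_merit(seebeck, sigma, kappa, temperature):
    """ZT = S^2 * sigma * T / kappa."""
    return seebeck ** 2 * sigma * temperature / kappa

s_p, s_n = 45e-6, 10e-6    # V/K, assumed Seebeck coefficients of the two MWCNT composite legs
v = module_voltage(n_junctions=10, s_p=s_p, s_n=s_n, delta_t=30)
zt = figure_of_merit(seebeck=45e-6, sigma=500, kappa=0.4, temperature=300)
print(f"Module voltage for dT = 30 K: {v*1e3:.1f} mV")
print(f"Figure of merit ZT: {zt:.4f}")
```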
Procedia PDF Downloads 153
919 Electrocatalytic Properties of Ru-Pd Bimetal Quantum Dots/TiO₂ Nanotube Arrays Electrodes Composites with Double Schottky Junctions
Authors: Shiying Fan, Xinyong Li
Abstract:
The development of highly efficient multifunctional catalytic materials for HER, ORR and photo-fuel-cell applications, based on combined electrochemical and photo-electrochemical principles, currently faces dire challenges. In this study, novel palladium (Pd) and ruthenium (Ru) bimetal quantum dots (BQDs) co-anchored on titania nanotube (NT) array electrodes have been successfully constructed by a facile two-step electrochemical strategy. Double Schottky junctions with superior performance in electrocatalytic (EC) hydrogen generation and solar fuel cell energy conversion (PE) have been obtained. Various physicochemical techniques, including UV-vis spectroscopy, TEM/EDX/HRTEM and SPV/TRV, together with electrochemical methods including EIS, C-V, I-V and I-T, were systematically employed to characterize the crystal, electronic and micro-interfacial structures of the composites with the double Schottky junction. The characterizations imply that the marked enhancement in the separation efficiency of photogenerated electron-hole pairs is mainly caused by the Schottky barriers within the nanocomposites, which greatly facilitate interfacial charge transfer for H₂ generation and solar fuel cell energy conversion. Moreover, the DFT calculations clearly indicate that the oriented growth of Ru and Pd bimetal atoms on the anatase (101) surface is mainly driven by the interaction between Ru/Pd and the surface atoms, and that the most active sites for the bimetal Ru and Pd adatoms on the perfect TiO₂ (101) surface are the 2cO-6cTi-3cO bridge sites and the 2cO bridge sites, with the highest adsorption energy of 9.17 eV. Furthermore, the electronic-structure calculations show that in the nanocomposites the number of impurity energy levels (i.e., from the co-anchored Ru-Pd BQDs) near the Fermi surface increases and some of them overlap with the original energy levels, promoting electronic transitions and reducing the band gap. Therefore, this work provides deeper insight into the molecular design of bimetal quantum dot (BQD)-assembled titania NT composites with superior performance for electrocatalytic hydrogen production and solar fuel cell energy conversion simultaneously.
Keywords: electrocatalytic, Ru-Pd bimetallic quantum dots, titania nanotube arrays, double Schottky junctions, hydrogen production
Procedia PDF Downloads 143
918 The Hidden Mechanism beyond Ginger (Zingiber officinale Rosc.) Potent in vivo and in vitro Anti-Inflammatory Activity
Authors: Shahira M. Ezzat, Marwa I. Ezzat, Mona M. Okba, Esther T. Menze, Ashraf B. Abdel-Naim, Shahnas O. Mohamed
Abstract:
Background: In order to decrease the burden of the high cost of synthetic drugs, it is important to focus on phytopharmaceuticals. The aim of our study was to search for the mechanism behind the anti-inflammatory potential of ginger (Zingiber officinale Roscoe) and to correlate it with its bioactive phytochemicals. Methods: Various extracts, viz. water, 50%, 70%, 80%, and 90% ethanol, were prepared from ginger rhizomes. Fractionation of the aqueous extract (AE) was accomplished using Diaion HP-20. The in vitro anti-inflammatory activity of the different extracts and isolated compounds was evaluated by protein denaturation inhibition, membrane stabilization, protease inhibition, and anti-lipoxygenase assays. The in vivo anti-inflammatory activity of the AE was estimated by assessment of rat paw oedema after carrageenan injection. Prostaglandin E2 (PGE2), certain inflammation markers (TNF-α, IL-6, IL-1α, IL-1β, INFr, MCP-1, MIP, RANTES, and NOx) and MPO activity in the paw oedema exudates were measured, and the total antioxidant capacity (TAC) was also determined. Histopathological alterations of the paw tissues were scored. Results: All the tested extracts showed significant (p < 0.1) anti-inflammatory activities. The highest inhibition of heat-induced albumin denaturation (66%) was exhibited by the 50% ethanol extract (250 μg/ml). The 70% and 90% ethanol extracts (500 μg/ml) were more potent membrane stabilizers (34.5% and 37%, respectively) than diclofenac (33%). The 80% and 90% ethanol extracts (500 μg/ml) showed maximum protease inhibition (56%). The strongest anti-lipoxygenase activity was observed for the AE, which showed more significant lipoxygenase inhibition than diclofenac (58% versus 52%) at the same concentration (125 μg/ml). Fractionation of the AE yielded four main fractions (Fr I-IV), which showed significant in vitro anti-inflammatory activity. Purification of Fr III and IV led to the isolation of 6-paradol (G1), 6-shogaol (G2), methyl 6-gingerol (G3), 5-gingerol (G4), 6-gingerol (G5), 8-gingerol (G6), 10-gingerol (G7), and 1-dehydro-6-gingerol (G8). G2 (62.5 μg/ml), G1 (250 μg/ml), and G8 (250 μg/ml) exhibited potent anti-inflammatory activity in all the studied assays, while G4 and G5 exhibited moderate activity. In vivo, administration of the AE ameliorated rat paw oedema in a dose-dependent manner. The AE (at 200 mg/kg) showed a significant reduction (60%) in PGE2 production. At 25-200 mg/kg, the AE significantly reduced all the studied inflammatory markers except IL-1α, and at 25 mg/kg it was superior to indomethacin in reducing IL-1β. Treatment of the animals with the AE (100, 200 mg/kg) or indomethacin (10 mg/kg) significantly reduced TNF-α, IL-6, MCP-1 and RANTES levels and MPO activity by about (31, 57 and 32%), (65, 60 and 57%), (27, 41 and 28%), (23, 32 and 23%) and (66, 67 and 67%), respectively. The AE at 100 and 200 mg/kg was equipotent to indomethacin in reducing the NOx level and in increasing the TAC. Histopathological examination revealed very little inflammatory cell infiltration and oedema when the AE (200 mg/kg) was administered prior to carrageenan. Conclusion: The anti-inflammatory activity of ginger is mediated by inhibiting macrophage and neutrophil activation as well as by negatively affecting monocyte and leukocyte migration. Moreover, it produced a dose-dependent decrease in pro-inflammatory cytokines and chemokines and replenished the total antioxidant capacity. We strongly recommend future investigation of ginger's effects on the potential signal transduction pathways.
Keywords: anti-lipoxygenase activity, inflammatory markers, 1-dehydro-6-gingerol, 6-shogaol
Procedia PDF Downloads 253
917 Exploring the Use of Augmented Reality for Laboratory Lectures in Distance Learning
Authors: Michele Gattullo, Vito M. Manghisi, Alessandro Evangelista, Enricoandrea Laviola
Abstract:
In this work, we explored the use of Augmented Reality (AR) to support students in laboratory lectures in Distance Learning (DL), designing an application that proved to be ready for use in the next semester. AR can help students understand complex concepts and increase their motivation in the learning process. However, despite the many prototypes in the literature, AR is still little used in schools and universities, mainly because its perceived advantages are limited relative to the investment costs, especially regarding the changes needed in teaching modalities. With the spread of the epidemiological emergency due to SARS-CoV-2, however, schools and universities were forced into a very rapid redefinition of consolidated processes towards forms of Distance Learning. Despite its many advantages, DL suffers from the impossibility of carrying out practical activities, which are of crucial importance in STEM ("Science, Technology, Engineering e Math") didactics. In this context, the perceived advantages of AR have increased considerably, since teachers are now more prepared for new teaching modalities and AR allows students to carry out practical activities on their own instead of being physically present in laboratories. In this work, we designed an AR application to support engineering students in understanding the assembly drawings of complex machines. Traditionally, this skill is acquired in the first years of the bachelor's degree in industrial engineering through laboratory activities in which the teacher shows the relevant components (e.g., bearings, screws, shafts) in a real machine and their representation in the assembly drawing. This research aims to explore the effectiveness of AR in allowing students to acquire this skill on their own, without being physically present in the laboratory. In a preliminary phase, we interviewed students to understand the main issues in learning this subject. The survey revealed that students had difficulty identifying machine components in an assembly drawing, matching the 2D representation of a component to its real shape, and understanding the functionality of a component within the machine. We therefore developed a mobile application using Unity3D, designed in collaboration with the course professors, that aims to solve these issues. Natural feature tracking is used to associate the 2D printed assembly drawing with the corresponding 3D virtual model, and the application can be displayed on students' tablets or smartphones. Users interact by selecting a component from a part list on the device; 3D representations of the components then appear on the printed drawing, coupled with 3D virtual labels for their location and identification. Users can also watch a 3D animation to learn how the components are assembled. Students evaluated the application through a questionnaire based on the System Usability Scale (SUS). The survey was given to 15 students selected from those who participated in the preliminary interview. The mean SUS score was 83 (SD 12.9) out of a maximum of 100, which supports teachers using the AR application in their courses. Another important finding is that almost all the students stated that this application would significantly support their comprehension when studying on their own.
Keywords: augmented reality, distance learning, STEM didactics, technology in education
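The mean SUS score of 83 (SD 12.9) comes from the standard System Usability Scale scoring rule: odd items contribute (response − 1), even items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0-100 score. The sketch below applies that rule; the two example response sets are invented, not the students' actual answers.

```python
# Illustrative System Usability Scale (SUS) scoring (invented questionnaire responses).
from statistics import mean, stdev

def sus_score(responses):
    """responses: list of 10 answers on a 1-5 Likert scale, in questionnaire order."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9 (positively worded)
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10 (negatively worded)
    return (odd + even) * 2.5

students = [
    [5, 2, 4, 1, 5, 2, 5, 1, 4, 2],  # hypothetical student A
    [4, 2, 5, 2, 4, 1, 4, 2, 5, 1],  # hypothetical student B
]
scores = [sus_score(r) for r in students]
print(f"SUS scores: {scores}, mean = {mean(scores):.1f}, SD = {stdev(scores):.1f}")
```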
Procedia PDF Downloads 128916 Targeting APP IRE mRNA to Combat Amyloid-β Protein Expression in Alzheimer’s Disease
Authors: Mateen A Khan, Taj Mohammad, Md. Imtaiyaz Hassan
Abstract:
Alzheimer’s disease is characterized by the accumulation of the processing products of the amyloid beta peptide cleaved from the amyloid precursor protein (APP). Iron increases the synthesis of amyloid beta peptides, which is why iron is present in the amyloid plaques of Alzheimer's disease patients. Iron misregulation in the brain is linked to the overexpression of APP, which is directly related to amyloid-β aggregation in Alzheimer’s disease. The APP 5'-UTR region encodes a functional iron-responsive element (IRE) stem-loop that represents a potential target for modulating amyloid production. Targeted regulation of APP gene expression through modulation of the 5’-UTR sequence function represents a novel approach for the potential treatment of AD, because altering APP translation can both improve the protective brain iron balance and provide anti-amyloid efficacy. Molecular docking analysis of APP IRE RNA with eukaryotic translation initiation factors yields several models exhibiting substantial binding affinity. The findings revealed that the interaction involved a set of functionally active residues within the binding sites of eIF4F. Notably, the interaction between APP IRE RNA and eIF4F was stabilized by multiple hydrogen bonds between residues of APP IRE RNA and eIF4F. It was evident that APP IRE RNA exhibited a structural complementarity that fit tightly within the binding pockets of eIF4F. The simulation studies further revealed the stability of the complexes formed between the RNA and eIF4F, which is crucial for assessing the strength of these interactions and their subsequent roles in the pathophysiology of Alzheimer’s disease. In addition, MD simulations capture conformational changes in the IRE RNA and protein molecules during their interactions, illustrating the mechanism of interaction, conformational change, and unbinding events, and how these may affect aggregation propensity and subsequent therapeutic implications. Our binding studies correlated well with the translation efficiency of APP mRNA. Overall, the outcome of this study suggests that genomic modification and/or inhibition of amyloid protein expression by targeting APP IRE RNA can be a viable strategy to identify potential therapeutic targets for AD, which can subsequently be exploited for developing novel therapeutic approaches. Keywords: Alzheimer's disease, protein-RNA interaction analysis, molecular docking simulations, conformational dynamics, binding stability, binding kinetics, protein synthesis
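For readers unfamiliar with how complex stability is typically judged from an MD trajectory, the sketch below computes a frame-by-frame RMSD against the starting structure. It is a generic illustration, not the authors' workflow; the synthetic coordinates stand in for a real trajectory, and the frames are assumed to be pre-aligned.

```python
# Generic illustration: RMSD of each trajectory frame relative to the initial
# structure as a simple indicator of complex stability. Coordinates are synthetic.
import numpy as np

def rmsd(frame, reference):
    """Root-mean-square deviation between two (N, 3) coordinate arrays (pre-aligned)."""
    diff = frame - reference
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Synthetic stand-in for an MD trajectory of the RNA-protein complex:
# 200 frames of 500 atoms fluctuating around the starting structure.
rng = np.random.default_rng(7)
reference = rng.normal(scale=10.0, size=(500, 3))
trajectory = reference + rng.normal(scale=0.8, size=(200, 500, 3))

rmsd_series = np.array([rmsd(frame, reference) for frame in trajectory])
# A low, plateauing RMSD over the simulation is one indicator of a stable complex.
print(f"mean RMSD = {rmsd_series.mean():.2f}, max RMSD = {rmsd_series.max():.2f}")
```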
Procedia PDF Downloads 64915 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in the construction, urban development or energy sectors, are among the key drivers that build the foundation of wealth and modern civilizations in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to more accurately predict project costs and establish realistic financial plans. The significance of this paper is that it helps improve the cost efficiency of megaprojects and decrease cost overruns. This research will assist Project Managers (PMs) in making timely and appropriate decisions about both cost and outcomes of ongoing projects. This research, therefore, examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects frequently overrun in cost and time even in the early phases, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models that forecast the total expected cost and project completion date during the execution phase of an ongoing project will be evaluated. The Earned Value Management (EVM) method is unable to predict cost at completion of a project accurately due to the lack of sufficiently detailed project information, especially in the early phase of the project. During the project execution phase, the Bayesian adaptive forecasting method incorporates predictions into the actual performance data from earned value management and revises pre-project cost estimates, making full use of the available information. The outcome of this research is improved accuracy of both cost prediction and final duration. This research will provide a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance. This warning method will support project managers in taking corrective actions on time. Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
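The EVM indices and the probabilistic cost-at-completion idea described above can be illustrated with a short sketch. The figures and the spread assumed for the future CPI below are hypothetical, and the Monte Carlo step is a generic illustration rather than the authors' Bayesian adaptive model.

```python
# Illustrative sketch (not the authors' model): classic EVM indices plus a
# simple Monte Carlo estimate of cost at completion. All figures are hypothetical.
import numpy as np

BAC = 500.0                        # budget at completion (e.g., $M)
EV, AC, PV = 180.0, 210.0, 200.0   # earned value, actual cost, planned value to date

CPI = EV / AC                      # cost performance index
SPI = EV / PV                      # schedule performance index
EAC_evm = AC + (BAC - EV) / CPI    # deterministic EVM estimate at completion

# Monte Carlo: treat the future CPI as uncertain around its current value.
rng = np.random.default_rng(42)
future_cpi = rng.normal(loc=CPI, scale=0.08, size=100_000).clip(0.4, 1.6)
eac_samples = AC + (BAC - EV) / future_cpi

print(f"CPI={CPI:.2f}, SPI={SPI:.2f}, EVM EAC={EAC_evm:.1f}")
print(f"MCS EAC: mean={eac_samples.mean():.1f}, "
      f"P10={np.percentile(eac_samples, 10):.1f}, P90={np.percentile(eac_samples, 90):.1f}")
```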
Procedia PDF Downloads 403914 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu
Abstract:
Sesame is an important oilseed crop in Ethiopia and the second most exported agricultural commodity after coffee. However, soil fertility management for the crop is poor, and a research-led farming system is lacking. The AquaCrop model was applied as a decision-support tool; it uses a semi-quantitative approach to simulate crop yield under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. Four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha nitrogen) and three improved varieties (Setit-1, Setit-2, and Humera-1) were used. Growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool in soil fertility management in sesame production. Keywords: aquacrop model, sesame, normalized water productivity, nitrogen fertilizer
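A minimal sketch of the model-performance statistics named above follows, assuming E and D refer to the Nash-Sutcliffe efficiency and Willmott's degree of agreement as usually defined in crop-model evaluation; the observed and simulated values are hypothetical.

```python
# Illustrative sketch: standard goodness-of-fit statistics for model evaluation,
# assuming E = Nash-Sutcliffe efficiency and D = Willmott's degree of agreement.
# The soil water content values below are hypothetical.
import numpy as np

def fit_statistics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nrmse = 100.0 * rmse / obs.mean()                                   # normalized RMSE (%)
    e = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"R2": r2, "RMSE": rmse, "N-RMSE": nrmse, "E": e, "D": d}

observed_swc = [112, 125, 140, 133, 120, 108]    # hypothetical soil water content (mm)
simulated_swc = [118, 122, 137, 138, 117, 112]
print(fit_statistics(observed_swc, simulated_swc))
```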
Procedia PDF Downloads 75913 The Impact of Mergers and Acquisitions on Financial Deepening in the Nigerian Banking Sector
Authors: Onyinyechi Joy Kingdom
Abstract:
Mergers and Acquisitions (M&A) have been proposed as a mechanism through which problems associated with inefficiency or poor performance in financial institutions could be addressed. The aim of this study is to examine the proposition that the recapitalization of banks, which encouraged Mergers and Acquisitions in the Nigerian banking system, would strengthen the domestic banks and improve financial deepening and the confidence of depositors. Hence, this study examines the impact of the 2005 M&A in the Nigerian banking sector on financial deepening using a mixed method (quantitative and qualitative approach). The quantitative part of this study utilised annual time series for a financial deepening indicator for the period 1997 to 2012, while the qualitative aspect adopted semi-structured interviews to collate data from three merged banks and three stand-alone banks in order to explore, understand and complement the quantitative results. Furthermore, a framework thematic analysis was employed to analyse the themes developed using NVivo 11 software. Using the quantitative approach, findings from the equality of means test (EMT) suggest that M&A have a significant impact on financial deepening. However, this method is not robust enough, given its weak validity, as it does not control for other potential factors that may determine financial deepening. Thus, to control for other factors that may affect the level of financial deepening, a Multiple Regression Model (MRM) and Interrupted Time Series Analysis (ITSA) were applied. The coefficient for the M&A dummy turned negative and insignificant in the MRM. In addition, the estimated post-intervention linear trend from the ITSA suggests that after M&A the level of financial deepening decreased annually; however, this was statistically insignificant. Similarly, using the qualitative approach, the results from the interviews supported the quantitative results from the ITSA and MRM. The results suggest that the interest rate should fall when the capital base is increased, in order to improve financial deepening. Hence, this study contributes to the existing literature by highlighting the importance of other factors that may affect financial deepening and the economy when policies intended to enhance bank performance and the economy are made. In addition, this study will inform the use of valuable policy instruments relevant to monetary authorities when formulating policies to strengthen the Nigerian banking sector and the economy. Keywords: mergers and acquisitions, recapitalization, financial deepening, efficiency, financial crisis
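As an illustration of the ITSA step described above, the sketch below fits a segmented OLS regression with a level-change dummy and a post-intervention trend around 2005. The series is simulated, and the specification is a generic interrupted time series model rather than the authors' exact one.

```python
# Minimal sketch (not the authors' exact specification): interrupted time series
# analysis of a financial deepening indicator around a 2005 intervention,
# using segmented OLS regression. The series below is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = np.arange(1997, 2013)
t = np.arange(len(years))                                   # time trend
post = (years >= 2005).astype(int)                          # level-change dummy (recapitalization/M&A)
t_post = np.where(post == 1, t - t[years == 2005][0], 0)    # post-intervention trend

rng = np.random.default_rng(0)
deepening = 10 + 0.4 * t + 1.0 * post - 0.2 * t_post + rng.normal(0, 0.5, len(t))

X = sm.add_constant(pd.DataFrame({"trend": t, "post": post, "post_trend": t_post}))
model = sm.OLS(deepening, X).fit()
print(model.summary().tables[1])   # post_trend coefficient: annual change after the intervention
```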
Procedia PDF Downloads 398912 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for the reduction of the mortality rate and neurological sequelae. In particular, the control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is most required for stabilizing the brain's physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, the manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff concerning medication and the settings of therapeutic apparatus. Thus, integrating and automating their control is very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance testing of a prototype control system on patients is not ethically allowed. A model of cerebral blood circulation, which is the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, the condition that the total volume of the intracranial cavity is almost constant, owing to the rigidity of the cranial bone, was not considered in that model. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed in consideration of the constancy of the intracranial cavity's total volume. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfusion areas of the bilateral anterior, middle, and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of/in each compartment can also be calculated by solving 40-dimensional nonlinear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. In the results, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to values suitable for the set condition within 60 minutes at most. Also, because these results did not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable CBF and ICP to be simulated more precisely by calculating the effect of blood pressure changes on extracellular fluid migration and that of ICP changes on CBF. Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
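To illustrate the kind of simultaneous differential equations the full model comprises, the following toy sketch solves a two-compartment exchange of extracellular fluid; it is not the authors' 14-compartment model, and the rate constants, pressures, and volumes are hypothetical.

```python
# Toy sketch (not the authors' 14-compartment model): two-compartment exchange of
# extracellular fluid solved as an ODE system. All parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

K_CAP = 0.02    # filtration rate from capillaries into tissue (mL/min per mmHg)
K_EXCH = 0.05   # exchange rate between the two tissue compartments (1/min)
K_ABS = 0.03    # absorption rate toward the CSF/venous side (1/min)
P_CAP = 25.0    # capillary pressure (mmHg), held constant here

def fluid_exchange(t, v):
    v1, v2 = v                                   # tissue fluid volumes (mL)
    dv1 = K_CAP * P_CAP - K_EXCH * (v1 - v2)     # capillary inflow minus inter-compartment flow
    dv2 = K_EXCH * (v1 - v2) - K_ABS * v2        # inflow from compartment 1 minus absorption
    return [dv1, dv2]

sol = solve_ivp(fluid_exchange, (0, 60), [120.0, 110.0], dense_output=True)
print("volumes after 60 min:", sol.y[:, -1])
```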
Procedia PDF Downloads 156911 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered include magnitude, Rrup, and Vs30. Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed. Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
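A minimal sketch of the proposed variable-ranking step follows, using scikit-learn's cross-validated LASSO on standardized predictors; the data are simulated stand-ins, not the NGA recordings, and the coefficients shown are purely illustrative.

```python
# Illustrative sketch (simulated data, not the NGA recordings): using LASSO to
# rank candidate seismological predictors of log spectral acceleration and select
# a compact subset. Predictors are standardized so coefficient magnitudes are comparable.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
magnitude = rng.uniform(4.5, 7.5, n)
rrup = rng.uniform(5, 200, n)          # closest distance to rupture (km)
vs30 = rng.uniform(150, 900, n)        # time-averaged shear-wave velocity (m/s)
log_sa = 1.2 * magnitude - 1.8 * np.log(rrup) - 0.4 * np.log(vs30) + rng.normal(0, 0.5, n)

X = StandardScaler().fit_transform(np.column_stack([magnitude, np.log(rrup), np.log(vs30)]))
names = ["Magnitude", "ln(Rrup)", "ln(Vs30)"]

lasso = LassoCV(cv=5).fit(X, log_sa)   # penalty weight chosen by cross-validation
ranking = sorted(zip(names, lasso.coef_), key=lambda p: abs(p[1]), reverse=True)
for name, coef in ranking:
    print(f"{name:10s} coefficient = {coef:+.3f}")   # zero coefficients are dropped variables
```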
Procedia PDF Downloads 330910 Mechanism of Action of New Sustainable Flame Retardant Additives in Polyamide 6,6
Authors: I. Belyamani, M. K. Hassan, J. U. Otaigbe, W. R. Fielding, K. A. Mauritz, J. S. Wiggins, W. L. Jarrett
Abstract:
We have investigated the flame-retardant efficiency of special new phosphate glass (P-glass) compositions having different glass transition temperatures (Tg), their effect on the processing conditions of polyamide 6,6 (PA6,6), and the final hybrid flame retardancy (FR). We have shown that the low-Tg P-glass composition (i.e., ILT 1) is a promising flame retardant for PA6,6 at concentrations of up to 15 wt. % compared to the intermediate- (IIT 3) and high-Tg (IHT 1) P-glasses. Cone calorimetry data showed that ILT 1 decreased both the peak heat release rate and the total heat released from the PA6,6/ILT 1 hybrids, resulting in the efficient formation of a glassy char layer. These intriguing findings prompted us to address several questions concerning the mechanism of action of the different P-glasses studied. The general mechanism of action of phosphorus-based FR additives occurs during the combustion stage by enhancing the morphology of the char and the thermal shielding effect. However, the present work shows that P-glass-based FR additives act during melt processing of the PA6,6/P-glass hybrids. Dynamic mechanical analysis (DMA) revealed that the Tg of PA6,6/ILT 1 was significantly shifted to a lower temperature (~65 °C) and another transition appeared at high temperature (~166 °C), indicating a strong interaction between PA6,6 and ILT 1. This was supported by a drop in the melting point and crystallinity of the PA6,6/ILT 1 hybrid material, as detected by differential scanning calorimetry (DSC). The dielectric spectroscopic investigation of the networks' molecular-level structural variations (i.e., hybrid chain motion, Tg and sub-Tg relaxations) agreed very well with the DMA and DSC findings; it was found that the three different P-glass compositions did not show any effect on the PA6,6 sub-Tg relaxations (related to the motions of the NH2 and OH chain end groups). Nevertheless, contrary to the IIT 3 and IHT 1 based hybrids, the PA6,6/ILT 1 hybrid material showed evidence of splitting of the PA6,6 Tg relaxation into two peaks. Finally, the CPMAS 31P-NMR data confirmed the miscibility between ILT 1 and PA6,6 at the molecular level, as a much larger enhancement in cross-polarization was observed for the PA6,6/15% ILT 1 hybrids. It can be concluded that compounding a low-Tg P-glass (i.e., ILT 1) with PA6,6 facilitates hydrolytic chain scission of the PA6,6 macromolecules through a potential chemical interaction between the phosphate and the alpha-carbon of the amide bonds of PA6,6, leading to better flame-retardant properties. Keywords: broadband dielectric spectroscopy, composites, flame retardant, polyamide, phosphate glass, sustainable
Procedia PDF Downloads 238909 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures
Authors: A. T. Al-Isawi, P. E. F. Collins
Abstract:
The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure's geometry due to the residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage have been taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and that the maximum percentage reduction in fire resistance is detected in the case where SSI is included in the scenario. The results are validated against the literature. Keywords: Abaqus software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction
Procedia PDF Downloads 122908 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller
Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian
Abstract:
The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of the chamber temperature from set target values, sends these deviations (which act as disturbances in the system) as a feedback signal (input), and controls the operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the Programmable Logic Controller (PLC). The developed control algorithm, with the chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity (222-273 ft/min) and fuel feeding rate (60-90 rpm) on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error. Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature
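As a rough illustration of the temperature-based feedback idea above, the sketch below runs a simple proportional control loop in Python that adjusts blower speed and fuel feeding rate toward a setpoint. It is an analogue of the logic, not the authors' ladder logic program; the setpoint, gains, and plant response are hypothetical, while the actuator limits follow the operating ranges quoted in the abstract.

```python
# Illustrative sketch (Python analogue, not the authors' ladder logic): a simple
# proportional temperature feedback loop that nudges blower speed and fuel feeding
# rate toward a chamber temperature setpoint. Setpoint, gains, and plant model are hypothetical.
SETPOINT_C = 650.0           # target chamber temperature
KP_AIR, KP_FUEL = 0.5, 0.2   # proportional gains for air velocity and fuel feed

air_velocity = 240.0         # ft/min, within the 222-273 ft/min operating range
fuel_rate = 75.0             # rpm, within the 60-90 rpm operating range
temperature = 600.0          # measured chamber temperature

def plant_response(temp, air, fuel):
    """Toy chamber model: more fuel raises temperature, excess air cools it."""
    return temp + 0.05 * (fuel * 10.0 - temp) - 0.01 * (air - 240.0)

for step in range(50):
    error = SETPOINT_C - temperature        # feedback signal: deviation from setpoint
    fuel_rate = min(90.0, max(60.0, fuel_rate + KP_FUEL * error * 0.1))
    air_velocity = min(273.0, max(222.0, air_velocity - KP_AIR * error * 0.1))
    temperature = plant_response(temperature, air_velocity, fuel_rate)

print(f"T={temperature:.1f} C, air={air_velocity:.1f} ft/min, fuel={fuel_rate:.1f} rpm")
```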
Procedia PDF Downloads 129