Search results for: simulation techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11019


1719 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storage systems. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of cost and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the solidification process is the so-called temperature-based approach. To minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived from energy conservation equations, the boundary condition at the inner tube wall is dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems with complex HEX geometries at considerably low computational effort.
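The enthalpy (temperature-based) treatment of solidification with a moving boundary can be sketched in a few lines. The following minimal 1D finite-volume example is only an illustration of the general technique, not the authors' segment model: the PCM properties, geometry and fixed wall temperature are hypothetical placeholders.

```python
import numpy as np

# Minimal 1D enthalpy-method finite-volume sketch of PCM solidification next
# to a cooled tube wall. All properties, the geometry and the fixed wall
# temperature are hypothetical placeholders, not values from the paper.
N, dx = 50, 1e-3                     # control volumes, cell width [m]
k, rho, cp = 0.2, 800.0, 2000.0      # conductivity, density, heat capacity
Lh = 200e3                           # latent heat [J/kg]
Tm, T_wall = 0.0, -10.0              # melting and wall temperatures [degC]

h = np.full(N, rho * Lh)             # volumetric enthalpy: just-liquid at Tm
dt = 0.4 * rho * cp * dx**2 / k      # explicit stability limit (margin) [s]

def temp(h):
    """Map volumetric enthalpy to temperature (enthalpy method)."""
    T = np.full_like(h, Tm)                          # mushy cells sit at Tm
    solid = h < 0.0
    T[solid] = Tm + h[solid] / (rho * cp)
    liquid = h > rho * Lh
    T[liquid] = Tm + (h[liquid] - rho * Lh) / (rho * cp)
    return T

for _ in range(2000):
    T = temp(h)
    # Ghost cells: fixed wall temperature on the left, adiabatic on the right.
    Tg = np.concatenate(([2.0 * T_wall - T[0]], T, [T[-1]]))
    h += dt * k * (Tg[2:] - 2.0 * Tg[1:-1] + Tg[:-2]) / dx**2

solid_fraction = float(np.mean(1.0 - np.clip(h / (rho * Lh), 0.0, 1.0)))
```

The enthalpy formulation avoids explicitly tracking the moving solid-liquid front, which is why time steps can stay coarse compared with a front-tracking CFD run of the whole storage.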

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 255
1718 Role of Financial Institutions in Promoting Micro Service Enterprises with Special Reference to Hairdressing Salons

Authors: Gururaj Bhajantri

Abstract:

The financial sector is the backbone of any economy, playing a crucial role in the mobilisation and allocation of resources; one of its main objectives is inclusive growth. The constituents of the financial sector are banks and financial institutions, which mobilise resources from the surplus sector and channel them to the needful sectors of the economy. The Micro, Small and Medium Enterprises sector in India covers a wide range of economic activities. These enterprises are classified on the basis of investment in equipment, and micro enterprises are divided into the manufacturing and services sectors. Micro service enterprises have an investment limit of up to ten lakhs on equipment. A hairdresser not only cuts and shaves but also provides different types of haircuts, hairstyles, trimming, hair dye, massage, manicure, pedicure, nail services, colouring, facials, makeup application, waxing, tanning and other beauty treatments. Hairdressing salons provide these services with the help of equipment, and they need an investment in equipment of not more than ten lakhs; hence, they can be considered micro service enterprises. Starting even a moderate salon requires more than Rs 2,50,000. Moreover, hairdressers are often unable to access organised finance and still borrow from money lenders at high rates of interest. The socio-economic conditions of hairdressers are not properly known. Hence, the present study throws light on the role of financial institutions in promoting hairdressing salons. The study also focuses on the socio-economic background of individuals in hairdressing salons and the problems faced by them. The present study is based on primary and secondary data. Primary data were collected from hairdressing salons in Davangere city, with samples selected through simple random sampling. The collected data were analysed and interpreted with the help of simple statistical tools.

Keywords: micro service enterprises, financial institutions, hairdressing salons, financial sector

Procedia PDF Downloads 194
1717 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effect of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for estimating a causal effect when modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, the maximum likelihood estimation technique is complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we propose modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant real data example. To sum up, covariate bias is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, an expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments, after adjusting for the bias introduced into the model by the truncation variable.
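As a rough illustration of the final step, the ratio of cumulative hazards between an active (treated) and passive (control) arm can be estimated nonparametrically with the Nelson-Aalen estimator. The data and the estimator below are illustrative only: no ties correction, no truncation adjustment, and not the paper's semiparametric model.

```python
import numpy as np

def nelson_aalen(times, events, grid):
    """Nelson-Aalen cumulative hazard on a time grid (no ties correction)."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    d = np.asarray(events, float)[order]
    at_risk = len(t) - np.arange(len(t))          # risk-set size at each time
    H = np.cumsum(d / at_risk)                    # sum of dN(t) / Y(t)
    return np.array([H[t <= g][-1] if np.any(t <= g) else 0.0 for g in grid])

# Hypothetical treated ("active") and control ("passive") arms:
# times in months, event = 1, censored = 0.
treat_t, treat_e = [2, 3, 5, 7, 8, 11, 12, 15], [1, 1, 0, 1, 1, 0, 1, 1]
ctrl_t, ctrl_e = [1, 2, 2, 4, 5, 6, 8, 9], [1, 1, 1, 1, 0, 1, 1, 1]

grid = [5.0, 10.0]
H1 = nelson_aalen(treat_t, treat_e, grid)
H0 = nelson_aalen(ctrl_t, ctrl_e, grid)
effect = H1 / H0     # cumulative-hazard ratio: < 1 favors the active arm
```

In the paper's setting the two cumulative hazards would come from the fitted transformation model after the truncation adjustment, not from raw Nelson-Aalen curves.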

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 113
1716 Instrumental Neutron Activation Analysis (INAA) and Atomic Absorption Spectroscopy (AAS) for the Elemental Analysis of Medicinal Plants from India Used in the Treatment of Heart Diseases

Authors: B. M. Pardeshi

Abstract:

Introduction: Minerals and trace elements are chemical elements required by our bodies for the numerous biological and physiological processes necessary for the maintenance of health. Medicinal plants are highly beneficial for the maintenance of good health and the prevention of diseases, and they are known as potential sources of minerals and vitamins. An estimated 30 to 40% of today's conventional drugs derive from the medicinal and curative properties of plants, which are also employed in herbal supplements, botanicals and nutraceuticals. Aim: The authors explored the mineral element content of some herbs, because mineral elements may play a significant role in the development and treatment of gastrointestinal diseases, and a close connection between the presence or absence of mineral elements and inflammatory mediators has been noted. Methods: The present study deals with the elemental analysis of medicinal plants by Instrumental Neutron Activation Analysis (INAA) and Atomic Absorption Spectroscopy (AAS). Medicinal herbs prescribed for skin diseases were purchased from markets and analyzed by INAA using a ²⁵²Cf spontaneous fission neutron source (flux ≈ 10⁹ n s⁻¹), with the induced activities counted by γ-ray spectrometry; AAS (Perkin Elmer 3100 Model), available at the Department of Chemistry, University of Pune, India, was used for the measurement of major, minor and trace elements. Results: 15 elements, viz. Al, K, Cl, Na and Mn by INAA and Cu, Co, Pb, Ni, Cr, Ca, Fe, Zn, Hg and Cd by AAS, were analyzed in different medicinal plants from India. A critical examination of the data shows that Ca, K, Cl, Al and Fe are present at major levels in most of the samples, while Na, Mn, Cu, Co, Pb, Ni, Cr, Zn, Hg and Cd are present at minor or trace levels. Conclusion: The beneficial therapeutic effect of the studied herbs may be related to their mineral element content. The elemental concentrations in the different medicinal plants are discussed.

Keywords: instrumental neutron activation analysis, atomic absorption spectroscopy, medicinal plants, trace elemental analysis, mineral contents

Procedia PDF Downloads 319
1715 Computational Approach to Cyclin-Dependent Kinase 2 Inhibitors Design and Analysis: Merging Quantitative Structure-Activity Relationship, Absorption, Distribution, Metabolism, Excretion, and Toxicity, Molecular Docking, and Molecular Dynamics Simulations

Authors: Mohamed Moussaoui, Mouna Baassi, Soukayna Baammi, Hatim Soufi, Mohammed Salah, Rachid Daoud, Achraf EL Allali, Mohammed Elalaoui Belghiti, Said Belaaouad

Abstract:

The present study investigates the quantitative structure-activity relationship (QSAR) of a series of thiazole derivatives reported as anticancer agents (hepatocellular carcinoma), principally using electronic descriptors calculated by the density functional theory (DFT) method and applying the multiple linear regression method. The developed model showed good statistical parameters (R² = 0.725, R²adj = 0.653, MSE = 0.060, R²test = 0.827, Q²cv = 0.536). The energy of the highest occupied molecular orbital (EHOMO), the electronic energy (TE), the shape coefficient (I), the number of rotatable bonds (NROT), and the index of refraction (n) were revealed to be the main descriptors influencing the anticancer activity. Additional thiazole derivatives were then designed, and their activities and pharmacokinetic properties were predicted using the validated QSAR model. These designed molecules were evaluated through molecular docking and molecular dynamics (MD) simulations, with binding affinity calculated using the MMPBSA script over a 100 ns simulation trajectory. This process aimed to study both their affinity for and stability towards Cyclin-Dependent Kinase 2 (CDK2), a target protein for cancer treatment. The research concluded by identifying four CDK2 inhibitors, A1, A3, A5 and A6, displaying satisfactory pharmacokinetic properties. MD results indicated that the designed compound A5 remained stable in the active center of the CDK2 protein, suggesting its potential as an effective inhibitor for the treatment of hepatocellular carcinoma. The findings of this study could contribute significantly to the development of effective CDK2 inhibitors.
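The core of a QSAR fit of this kind is ordinary least squares on a descriptor matrix, validated with statistics like R² and a cross-validated Q². The sketch below uses an invented descriptor matrix and activities, not the paper's thiazole data set, purely to show how those statistics are computed.

```python
import numpy as np

# Toy multiple-linear-regression QSAR sketch: fit activity against a few
# descriptors and report R^2 plus a leave-one-out Q^2. The descriptor matrix
# and activities are invented, not the paper's thiazole data set.
rng = np.random.default_rng(0)
n = 20
X = rng.normal(size=(n, 3))               # e.g. EHOMO, NROT, refractive index
y = X @ np.array([0.8, -0.4, 0.3]) + 0.1 * rng.normal(size=n)

A = np.column_stack([np.ones(n), X])      # intercept + descriptors
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - np.sum((y - A @ coefs) ** 2) / ss_tot

press = 0.0                               # leave-one-out cross-validation
for i in range(n):
    mask = np.arange(n) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    press += float(y[i] - A[i] @ c) ** 2
q2 = 1.0 - press / ss_tot
```

Q² is always below R² because each prediction is made on a compound left out of the fit, which is why it is the more honest measure of a QSAR model's predictive power.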

Keywords: QSAR, ADMET, Thiazole, anticancer, molecular docking, molecular dynamic simulations, MMPBSA calculation

Procedia PDF Downloads 82
1714 Evaluation of κ-Carrageenan Hydrogel Efficiency in Wound-Healing

Authors: Ali Ayatic, Emad Mozaffari, Bahareh Tanhaei, Maryam Khajenoori, Saeedeh Movaghar Khoshkho, Ali Ayati

Abstract:

The abuse of antibiotics such as tetracycline (TC) is a great global threat, and the use of topical antibiotics is a promising tactic that can help to solve this problem. Antibiotic therapy is often appropriate and necessary for acute wound infections, and topical tetracycline can be highly efficient in improving the wound healing process in diabetics. Owing to the advantages of drug-loaded hydrogels as wound dressings, such as ease of handling, high moisture resistance, excellent biocompatibility, and the ability to activate immune cells to speed wound healing, they are regarded as an ideal wound treatment. In this work, tetracycline-loaded hydrogels combining agar (AG) and κ-carrageenan (k-CAR) as polymer materials were prepared, with span60 surfactant introduced as a drug carrier. Field emission scanning electron microscopy (FESEM) and Fourier-transform infrared spectroscopy (FTIR) techniques were employed to provide detailed information on the morphology, composition and structure of the fabricated drug-loaded hydrogels; their mechanical properties and water vapor permeability were investigated as well. Two types of bacteria, one gram-negative and one gram-positive, were used to explore the antibacterial properties of the prepared tetracycline-containing hydrogels. Their swelling and drug release behavior was studied by varying factors such as the polysaccharide ratio (MAG/MCAR), the span60 surfactant concentration, the potassium chloride (KCl) concentration and the release medium (deionized water (DW), phosphate-buffered saline (PBS), and simulated wound fluid (SWF)) at different times. Finally, the kinetics of hydrogel swelling were studied, and the experimental data of TC release into DW, PBS and SWF were evaluated using mathematical models such as the Higuchi, Korsmeyer-Peppas, zero-order and first-order models in linear and nonlinear forms.
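Of the release models named above, the Korsmeyer-Peppas power law is the one usually fitted first, since its exponent hints at the release mechanism. The sketch below fits it by linear regression in log-log space to invented release data; the times, fractions and thresholds are illustrative, not measurements from this work.

```python
import numpy as np

# Fit the Korsmeyer-Peppas model Mt/Minf = k * t^n to invented cumulative
# release data (the model is valid for the early portion, Mt/Minf < 0.6)
# by linear regression in log-log space. Data are illustrative only.
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])            # hours
frac = np.array([0.12, 0.18, 0.26, 0.38, 0.47, 0.55])   # fraction released

n_exp, log_k = np.polyfit(np.log(t), np.log(frac), 1)
k = np.exp(log_k)
# Rule of thumb for cylindrical matrices: n <= 0.45 suggests Fickian
# diffusion; 0.45 < n < 0.89 suggests anomalous (non-Fickian) transport.
```

The zero-order, first-order and Higuchi models can be fitted the same way by regressing the fraction released against t, log of the remaining fraction, or the square root of t, respectively.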

Keywords: drug release, hydrogel, tetracycline, wound healing

Procedia PDF Downloads 69
1713 Screening of Phytochemicals Compounds from Chasmanthera dependens and Carissa edulis as Potential Inhibitors of Carbonic Anhydrases CA II (3HS4) Receptor using a Target-Based Drug Design

Authors: Owonikoko Abayomi Dele

Abstract:

Epilepsy is an unresolved disease that needs urgent attention. It is a brain disorder that affects over sixty-five (65) million people around the globe. Despite the availability of commercial anti-epileptic drugs, the war against this condition is yet to be won: most epilepsy patients are resistant to the available anti-epileptic medications, so affordable novel therapies against epilepsy are a necessity. Numerous phytochemicals have been reported for their potency, efficacy and safety as therapeutic agents against many diseases. This study investigated 99 phytochemicals isolated from Chasmanthera dependens and Carissa edulis against the carbonic anhydrase II drug target. The absorption, distribution, metabolism, excretion and toxicity (ADMET) profiles of the isolated compounds were examined using the admetSAR-2 web server, while SwissADME was used to analyze the oral bioavailability, drug-likeness and lead-likeness properties of the selected leads. The PASS web server was used to predict the biological activities of the selected leads, and other important physicochemical properties and interactions of the leads with the active site of the target were examined after molecular docking simulation with the PyRx virtual screening tool. The study identified seven lead compounds for the CA II (3HS4) receptor: C49 alpha-carissanol (-7.6 kcal/mol), C13 Catechin (-7.4 kcal/mol), C45 Salicin (-7.4 kcal/mol), C6 Bisnorargemonine (-7.3 kcal/mol), C36 Pallidine (-7.1 kcal/mol), S4 Lacosamide (-7.1 kcal/mol), and S7 Acetazolamide (-6.4 kcal/mol). These lead compounds are probable inhibitors of the drug target, given their good binding affinities and favourable interactions with the active site, excellent ADMET profiles, PASS properties, drug-likeness, lead-likeness and oral bioavailability. The identified leads have better binding energies than the two standards. Thus, the seven identified lead compounds can be developed further towards new anti-epileptic medications.
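The selection step in such a virtual screen is essentially a sort-and-filter on docking scores. The sketch below reuses the affinities reported in the abstract; the filtering rule (keep compounds matching or beating the better standard) is our own simplification of the workflow, not the authors' exact criterion.

```python
# Rank the screened compounds by the docking scores reported in the abstract
# (kcal/mol; more negative = stronger binding) and keep those matching or
# beating the better of the two standard drugs. The tie-handling rule here
# is an assumption, not the paper's stated procedure.
scores = {
    "C49 alpha-carissanol": -7.6, "C13 Catechin": -7.4, "C45 Salicin": -7.4,
    "C6 Bisnorargemonine": -7.3, "C36 Pallidine": -7.1,
    "S4 Lacosamide (standard)": -7.1, "S7 Acetazolamide (standard)": -6.4,
}
best_standard = min(v for name, v in scores.items() if "standard" in name)
leads = sorted((name for name, v in scores.items()
                if "standard" not in name and v <= best_standard),
               key=scores.get)
```

In practice the shortlist is then re-filtered on ADMET, PASS and drug-likeness predictions rather than on binding energy alone.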

Keywords: drug-likeness, phytochemicals, carbonic anhydrases, metalloenzymes, active site, ADMET

Procedia PDF Downloads 28
1712 Outcomes of Pain Management for Patients in Srinagarind Hospital: Acute Pain Indicator

Authors: Chalermsri Sorasit, Siriporn Mongkhonthawornchai, Darawan Augsornwan, Sudthanom Kamollirt

Abstract:

Background: Although knowledge of pain and pain management is improving, pain management remains inadequate for many patients. The Nursing Division of Srinagarind Hospital is responsible for the pain management system, including work instruction development and pain management indicators. We developed an information technology program for monitoring pain quality indicators, which was implemented in all nursing departments in April 2013. Objective: To study the outcomes of acute pain management in terms of process and outcome indicators. Method: This is a retrospective descriptive study. The sample population was patients who had acute pain 24-48 hours after a procedure while admitted to Srinagarind Hospital in 2014. Data were collected from the information technology program; 2,709 patients with acute pain from 10 nursing departments were recruited. The research tools were 1) a demographic questionnaire, 2) a pain management questionnaire for process indicators, and 3) a pain management questionnaire for outcome indicators. Data were analyzed and presented as percentages and means. Results: The process indicators show that nurses used a pain assessment tool and recorded pain in 99.19% of cases, and reassessed pain after intervention in 96.09%. Opioids were given as pain medication to 80.15% of the patients, and the most frequently used non-pharmacological intervention was positioning (76.72%). For the outcome indicators, nearly half of the patients (49.90%) had moderate to severe pain; the mean worst-pain score was 6.48 and the mean overall pain score was 4.08. Patient satisfaction with pain management was good (49.17%) or very good (46.62%). Conclusion: Nurses used pain assessment tools and pain documentation, meeting the goal of the pain management process, and patient satisfaction with pain management was high. However, patients still experienced moderate to severe pain. Nurses should adhere more strictly to the pain management guidelines, especially the acute pain guidelines when pain intensity is moderate to high, and should develop and practice a non-pharmacological pain management program to continually improve the quality of pain management. The information technology program should include more details about non-pharmacological pain techniques.

Keywords: outcome, pain management, acute pain, Srinagarind Hospital

Procedia PDF Downloads 217
1711 Investigation of Produced and Ground Water Contamination of Al Wahat Area South-Eastern Part of Sirt Basin, Libya

Authors: Khalifa Abdunaser, Salem Eljawashi

Abstract:

Study area is threatened by numerous petroleum activities. The most important risk is associated with dramatic dangers of misuse and oil and gas pollutions, such as significant volumes of produced water, which refers to waste water generated during the production of oil and natural gas and disposed on the surface surrounded oil and gas fields. This work concerns the impact of oil exploration and production activities on the physical and environment fate of the area, focusing on the investigation and observation of crude oil migration as toxic fluid. Its penetration in groundwater resulted from the produced water impacted by oilfield operations disposed to the earth surface in Al Wahat area. Describing the areal distribution of the dominant groundwater quality constituents has been conducted to identify the major hydro-geochemical processes that affect the quality of water and to evaluate the relations between rock types and groundwater flow to the quality and geochemistry of water in Post-Eocene aquifer. The chemical and physical characteristics of produced water, where it is produced, and its potential impacts on the environment and on oil and gas operations have been discussed. Field work survey was conducted to identify and locate a large number of monitoring wells previously drilled throughout the study area. Groundwater samples were systematically collected in order to detect the fate of spills resulting from the various activities at the oil fields in the study area. Spatial distribution maps of the water quality parameters were built using Kriging methods of interpolation in ArcMap software. Thematic maps were generated using GIS and remote sensing techniques, which were applied to include all these data layers as an active database for the area for the purpose of identifying hot spots and prioritizing locations based on their environmental conditions as well as for monitoring plans.
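The mapping step turns scattered well measurements into a continuous surface. The study uses Kriging in ArcMap; as a simpler stand-in for the same idea, the sketch below applies inverse distance weighting (IDW) to invented well coordinates and TDS values. It illustrates spatial interpolation generally, not the study's geostatistical model.

```python
import numpy as np

# Inverse-distance-weighting (IDW) sketch of spatial interpolation, a simpler
# stand-in for the Kriging used in the study. Well coordinates (km) and
# TDS values (mg/L) are invented.
wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tds = np.array([1200.0, 900.0, 1500.0, 1100.0])

def idw(points, values, query, power=2.0, eps=1e-12):
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                      # query coincides with a well
        return float(values[d.argmin()])
    w = d ** -power                        # closer wells weigh more
    return float(np.sum(w * values) / np.sum(w))

center = idw(wells, tds, np.array([0.5, 0.5]))   # equidistant: plain average
```

Unlike IDW, Kriging weights also account for the spatial covariance (variogram) of the data and provide an estimation variance at each grid node, which is why it is preferred for contamination mapping.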

Keywords: Sirt Basin, produced water, Al Wahat area, groundwater

Procedia PDF Downloads 130
1710 An Efficient Emitting Supramolecular Material Derived from Calixarene: Synthesis, Optical and Electrochemical Features

Authors: Serkan Sayin, Songul F. Varol

Abstract:

Organic light-emitting diodes (OLEDs) have attracted great attention since their efficient performance in flat panel displays and solid-state lighting was realized. Because of their highly efficient electroluminescence, brightness and broad emission range, organic light-emitting diodes are preferred over materials based on liquid crystals. Calixarenes, obtained from the reaction of p-tert-butylphenol and formaldehyde in a suitable base, have been used in various research areas such as catalysis, enzyme immobilization and applications, ion carriers, sensors, nanoscience, etc. In addition, their versatile frameworks, as well as their easy functionalization, make them effective candidates in applied chemistry. Herein, a calix[4]arene derivative has been synthesized, and its structure has been fully characterized using Fourier-transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (¹H-NMR), carbon-13 nuclear magnetic resonance (¹³C-NMR), liquid chromatography-mass spectrometry (LC-MS) and elemental analysis techniques. The calixarene derivative has been employed as an emitting layer in the fabrication of organic light-emitting diodes, and the optical and electrochemical features of the calixarene-containing organic light-emitting diode (Clx-OLED) have been examined. The results showed that the Clx-OLED exhibited blue emission and high external quantum efficiency. In conclusion, the obtained results indicate that the synthesized calixarene derivative is a promising chromophore with an efficient fluorescence quantum yield, which makes it an attractive candidate for fabricating effective materials for fluorescent probes and labeling studies. This study was financially supported by the Scientific and Technological Research Council of Turkey (TUBITAK Grant no. 117Z402).

Keywords: calixarene, OLED, supramolecular chemistry, synthesis

Procedia PDF Downloads 240
1709 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, we are interested in high school remedial courses, whose purpose is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, some prerequisites between the teaching units must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model; solving it through a general-purpose solver is practical for small instances only, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we use a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. The heuristic algorithm, on the other hand, is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement in the objective function value obtained by the MIP model; this improvement ranges between 18% and 66%.
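A fast constructive procedure of the kind mentioned above can be illustrated with a toy greedy pass: activate teaching units in order of quality-per-cost, honoring a budget and prerequisites. The units, scores and budget below are invented, and the real procedure also handles teachers, rooms and timetable constraints, so this is only a sketch of the flavor of the approach.

```python
# Toy constructive heuristic: greedily activate teaching units by
# quality-per-cost under a budget, honoring prerequisites. All data are
# invented; the paper's procedure handles far richer constraints.
units = {                 # unit: (cost, quality score, prerequisites)
    "alg1": (3, 8, []),
    "alg2": (2, 6, ["alg1"]),
    "geom": (4, 7, []),
    "stat": (3, 5, []),
}
budget = 8

chosen, spent = [], 0
for name, (cost, score, prereq) in sorted(
        units.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
    if spent + cost <= budget and all(p in chosen for p in prereq):
        chosen.append(name)
        spent += cost
```

Such a feasible solution can also warm-start the MIP solver, tightening the incumbent bound from which the optimality gap is measured.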

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 590
1708 Wireless Gyroscopes for Highly Dynamic Objects

Authors: Dmitry Lukyanov, Sergey Shevchenko, Alexander Kukaev

Abstract:

Modern MEMS gyroscopes have strengthened their position in motion control systems and have led to the creation of tactical-grade sensors (better than 15 deg/h). This was achieved by virtue of the success of micro- and nanotechnology development, cooperation among international experts and the experience gained in the mass production of MEMS gyros. This production is knowledge-intensive, often unique and, therefore, difficult to develop, especially due to the use of 3D technology. The latter is usually associated with the manufacturing of inertial masses and their elastic suspension, which determines the vibration and shock resistance of gyros. Today, consumers developing highly dynamic objects or objects working under extreme conditions require gyro shock resistance of up to 65,000 g and a measurement range of more than 10,000 deg/s. Such characteristics can be achieved by solid-state gyroscopes (SSG) without inertial masses or elastic suspensions, which, for example, can be constructed using the molecular kinetics of bulk or surface acoustic waves (SAW). The excellent cost-effectiveness of producing such sensors and their high level of structural integration provide a basis for increased accuracy, size reduction and a significant drop in total production costs. Existing principles of SAW-based sensors rest on the theory of SAW propagation in rotating coordinate systems, and a short introduction to the theory of the gyroscopic (Coriolis) effect in SAW is provided in the report. Nowadays, more and more applications require passive and wireless sensors, and SAW-based gyros provide an opportunity to create them. Several design concepts incorporating reflective delay lines have been proposed in recent years but have faced some criticism; still, the concept is promising and remains of interest at St. Petersburg Electrotechnical University, where several experimental models have been developed and tested to find the minimal configuration of a passive and wireless SAW-based gyro. Structural schemes, potential characteristics and known limitations are stated in the report. Special attention is dedicated to a novel FEM modeling method in which piezoelectric and gyroscopic effects are taken into account simultaneously.

Keywords: FEM simulation, gyroscope, OOFELIE, surface acoustic wave, wireless sensing

Procedia PDF Downloads 355
1707 Effects of Active Muscle Contraction in a Car Occupant in Whiplash Injury

Authors: Nisha Nandlal Sharma, Julaluk Carmai, Saiprasit Koetniyom, Bernd Markert

Abstract:

Whiplash injuries are usually associated with car accidents. A sudden forward or backward jerk to the head causes neck strain, which is the result of damage to muscles or tendons; neck pain and headaches are the two most common symptoms. The symptoms of whiplash are commonly reported in studies, but the injury mechanism is poorly understood, and the neck muscles are the most important factor in studying neck injury. This study focuses on the development of a finite element (FE) model of the human neck muscles to study the whiplash injury mechanism and the effect of active muscle contraction on occupant kinematics. A detailed understanding of the injury mechanism will promote the development and evaluation of new safety systems in cars, hence reducing the occurrence of severe injuries to the occupant. In the present study, an active human finite element (FE) model with a 3D neck muscle model is developed. The neck muscles were modeled with a combination of solid tetrahedral elements and 1D beam elements: active muscle properties were represented by the beam elements, whereas passive properties were represented by the solid tetrahedral elements. To generate muscle force according to the inputted activation levels, a Hill-type muscle model was applied to the beam elements; to simulate the non-linear passive properties of muscle, the solid elements were modeled with a rubber/foam material model. Material properties were assigned from published experimental tests. Important muscles were then inserted into the THUMS (Total Human Model for Safety) 50th percentile male pedestrian model; to reduce the simulation time required, the THUMS lower body parts were not included. After muscle insertion, THUMS was given boundary conditions similar to the experimental tests. The model was exposed to 4 g and 7 g rear impacts, as these loads are close to the low-speed impacts that cause whiplash, and the effect of muscle activation level on occupant kinematics during whiplash was analyzed.
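The Hill-type formulation mentioned above computes active force as the product of activation, maximum isometric force, and force-length and force-velocity scaling curves. The sketch below shows one common shape of that relation; the curve parameters are generic placeholders, not the THUMS material data.

```python
import math

# Minimal Hill-type active muscle force sketch, F = a * Fmax * fL(l) * fV(v),
# with a Gaussian force-length curve and a simple force-velocity curve.
# All parameter values are generic placeholders, not THUMS data.
def hill_force(activation, l_norm, v_norm, f_max=1000.0):
    """activation in [0, 1]; length and velocity are normalized
    (l_norm = 1 at optimal length; v_norm < 0 means shortening)."""
    f_l = math.exp(-(((l_norm - 1.0) / 0.45) ** 2))      # force-length
    if v_norm < 0.0:                                     # concentric branch
        f_v = (1.0 + v_norm) / (1.0 - v_norm / 0.25)
    else:                                                # eccentric branch
        f_v = 1.5 - 0.5 * (1.0 + v_norm) / (1.0 + 7.5 * v_norm)
    return activation * f_max * f_l * max(f_v, 0.0)
```

In an FE solver this force is applied along each 1D beam element at every time step, while the activation input itself comes from a neural control or reflex model.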

Keywords: finite element model, muscle activation, neck muscle, whiplash injury prevention

Procedia PDF Downloads 344
1706 Cost-Effective and Optimal Control Analysis for Mitigation Strategy to Chocolate Spot Disease of Faba Bean

Authors: Haileyesus Tessema Alemneh, Abiyu Enyew Molla, Oluwole Daniel Makinde

Abstract:

Introduction: Faba bean is one of the most important crops grown worldwide for human and animal consumption. Despite its diverse significance, faba bean output has been limited by several biotic and abiotic factors. Many faba bean pathogens have been reported so far, of which the most important yield-limiting disease is chocolate spot disease (Botrytis fabae). Mathematical modeling has improved our understanding of the dynamics of disease transmission and of decision-making processes for disease-control intervention programs, and plant disease modeling currently attracts considerable research interest. Objective: In this paper, a deterministic mathematical model for chocolate spot disease (CSD) on the faba bean plant with an optimal control formulation was developed and analyzed to examine the best strategy for controlling CSD. Methodology: Three control interventions, prevention (u1), quarantine (u2), and chemical control (u3), are employed to establish the optimal control model. The Hamiltonian, the adjoint variables, the characterization of the controls, and the optimality system are all generated employing Pontryagin’s maximum principle. A cost-effective approach is chosen from a set of possible integrated strategies using the incremental cost-effectiveness ratio (ICER). The forward-backward sweep iterative approach is used to run numerical simulations. Results: The Hamiltonian, the optimality system, the characterization of the controls, and the adjoint variables were established. The numerical results demonstrate that each integrated strategy can reduce the disease within the specified period. However, due to limited resources, an integrated strategy of prevention and uprooting was found to be the most cost-effective strategy to combat CSD.
Conclusion: Therefore, stakeholders and policymakers should give attention to this cost-effective and environmentally friendly integrated strategy to control CSD, and should disseminate the integrated intervention to farmers in order to fight the spread of CSD in the faba bean population and produce the expected yield from the field.
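As a toy illustration of the ICER selection step described in the abstract, strategies can be ordered by effectiveness and each incremental cost divided by the incremental effect; the cost and infections-averted figures below are hypothetical, not the paper's data.

```python
# Illustrative ICER ranking of control strategies (hypothetical numbers).
# Strategies are sorted by effectiveness (e.g., infections averted) and each
# is compared incrementally against the next-less-effective alternative.

def icer_ranking(strategies):
    """strategies: list of (name, total_cost, effect). Returns (name, ICER) pairs."""
    ranked = sorted(strategies, key=lambda s: s[2])   # ascending effectiveness
    results = []
    prev_cost, prev_eff = 0.0, 0.0
    for name, cost, eff in ranked:
        icer = (cost - prev_cost) / (eff - prev_eff)  # incremental cost per unit effect
        results.append((name, icer))
        prev_cost, prev_eff = cost, eff
    return results

if __name__ == "__main__":
    data = [("prevention", 120.0, 400.0),
            ("prevention+uprooting", 180.0, 700.0),
            ("all three controls", 400.0, 750.0)]
    for name, icer in icer_ranking(data):
        print(f"{name}: ICER = {icer:.3f}")
```

Strategies with a low ICER dominate: here the second strategy adds much effect for little extra cost, mirroring how the study singled out prevention plus uprooting.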

Keywords: CSD, optimal control theory, Pontryagin’s maximum principle, numerical simulation, cost-effectiveness analysis

Procedia PDF Downloads 63
1705 Assessment of Land Suitability for Tea Cultivation Using Geoinformatics in the Mansehra and Abbottabad District, Pakistan

Authors: Nasir Ashraf, Sajid Rahid Ahmad, Adeel Ahmad

Abstract:

Pakistan is a major tea-consuming country and is ranked as the third largest importer of tea worldwide. Of all beverages consumed in Pakistan, tea is in greatest demand, which makes tea import inevitable. Being an agrarian country, Pakistan should cultivate its own tea and save the millions of dollars spent on tea import. The need, therefore, is to identify the most suitable areas with favorable weather conditions and suitable soils where tea can be planted. This research was conducted over District Mansehra and District Abbottabad in the Khyber Pakhtunkhwa Province of Pakistan, where the most favorable conditions for tea cultivation already exist and where the National Tea Research Institute has carried out successful experiments to cultivate high-quality tea. A geospatial approach was adopted to meet the objectives of this research, using remotely sensed data, i.e., the ASTER DEM and Landsat 8 imagery. The remote sensing data were processed in ERDAS IMAGINE and ENVI and further analyzed in the ESRI ArcGIS Spatial Analyst for the final results and their representation in map layouts. Integration of remote sensing data with GIS provided the suitability analysis. The results showed that, of the whole study area, 13.4% is highly suitable while 33.44% is suitable for tea plantation. The outcome of this research is a GIS-based model and a structured data format for agriculture planners and tea growers. Identification of suitable tea-growing areas using remotely sensed data and GIS techniques is a pressing need for the country. The analysis enables planners to design a variety of action plans in an economical and scientific manner that can lead tea production in Pakistan to meet demand. This geomatics-based model and approach may be used to identify further areas for tea cultivation so that the country can reduce imports by planting its own tea and become self-sufficient in tea production.
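The weighted-overlay logic behind such a suitability analysis can be sketched as follows; the criterion layers, rescaling rules, and weights here are hypothetical stand-ins for the DEM- and Landsat-derived rasters used in the study.

```python
import numpy as np

# Toy weighted-overlay suitability analysis on raster criteria, in the spirit
# of an ArcGIS Spatial Analyst workflow. Layers, thresholds, and weights are
# illustrative assumptions, not the study's actual criteria.

def suitability(slope_deg, rainfall_mm, soil_score, weights=(0.4, 0.35, 0.25)):
    # Rescale each criterion to 0..1 (higher = more suitable for tea).
    s_slope = np.clip(1.0 - slope_deg / 45.0, 0, 1)        # gentler slopes preferred
    s_rain = np.clip((rainfall_mm - 1000.0) / 1000.0, 0, 1)  # >= 2000 mm ideal
    s_soil = np.clip(soil_score / 10.0, 0, 1)              # soil-map class 0..10
    w1, w2, w3 = weights
    return w1 * s_slope + w2 * s_rain + w3 * s_soil

slope = np.array([[5.0, 40.0], [15.0, 10.0]])     # tiny 2x2 "raster"
rain = np.array([[2000.0, 1200.0], [1800.0, 2100.0]])
soil = np.array([[8.0, 3.0], [6.0, 9.0]])

score = suitability(slope, rain, soil)
highly_suitable = score >= 0.7                    # threshold chosen for illustration
print(score.round(2))
```

In a real workflow each array would be a full-resolution raster and the class breaks (highly suitable vs. suitable) would be calibrated against ground truth.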

Keywords: agrarian country, GIS, geoinformatics, suitability analysis, remote sensing

Procedia PDF Downloads 376
1704 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology emerged in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
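A minimal sketch of the offline/online split described above, using proper orthogonal decomposition to build a reduced basis: snapshots of full-order solutions are compressed into a few modes, and the full-order operator is projected onto them for a fast online solve. The matrices below are random stand-ins for an actual FE model.

```python
import numpy as np

# Offline stage: build a reduced basis (RB) from solution snapshots via SVD,
# then project the full-order operator. Online stage: solve the small system
# and lift the result back to the full space. All data here are synthetic.

rng = np.random.default_rng(0)
n_dof, n_snap, r = 200, 30, 5

snapshots = rng.standard_normal((n_dof, n_snap))   # columns = full FE solutions
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                                   # first r POD modes (offline)

K = rng.standard_normal((n_dof, n_dof))            # stand-in full-order operator
K_r = basis.T @ K @ basis                          # reduced operator (r x r)

f = rng.standard_normal(n_dof)                     # stand-in load vector
u_r = np.linalg.solve(K_r, basis.T @ f)            # fast online solve (r unknowns)
u_approx = basis @ u_r                             # lift back to full space
print(K_r.shape, u_approx.shape)
```

The online cost scales with r rather than the full number of degrees of freedom, which is what makes real-time anomaly checks feasible.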

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 85
1703 Carbon-Based Electrodes for Parabens Detection

Authors: Aniela Pop, Ianina Birsan, Corina Orha, Rodica Pode, Florica Manea

Abstract:

A carbon nanofiber-epoxy composite electrode has been investigated through voltammetric and amperometric techniques in order to detect parabens in aqueous solutions. The occurrence of these preservative compounds in the environment as emerging pollutants has been extensively studied in recent decades, and consequently, a rapid and reliable method for their quantitative determination is required. In this study, methylparaben (MP) and propylparaben (PP) were chosen as representatives of the paraben class. The individual electrochemical detection of each paraben was successfully performed. Their electrochemical oxidation occurred at the same potential value; their simultaneous quantification should therefore be assessed electrochemically only as a general index of the paraben class, i.e., as a cumulative signal corresponding to both MP and PP in solution. The influence of pH on the electrochemical signal was studied. Varying the pH between 1.3 and 9.0 allowed shifting the detection potential to smaller values, which is highly desirable in electroanalysis; the signal is also better defined, and higher sensitivity is achieved. Differential-pulsed voltammetry and square-wave voltammetry were exploited under the optimum pH conditions to improve the electroanalytical performance of paraben detection, and the operating conditions, i.e., the step potential, modulation amplitude, and frequency, were selected. Chronoamperometry, although the simplest electrochemical detection method, led to worse sensitivity, probably due to a fouling effect on the electrode surface. The best electroanalytical performance was achieved by the pulsed voltammetric techniques, but the selection of the electrochemical technique depends on the concrete practical application. Good reproducibility of the voltammetric method using the carbon nanofiber-epoxy composite electrode was determined, and no interference effect was found for the cation and anion species that are common in the water matrix.
Besides these characteristics, the long lifetime of the electrode gives the carbon nanofiber-epoxy composite electrode great potential for practical applications.
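Quantification with such a sensor typically rests on a linear calibration of peak current against concentration; the sketch below shows that standard step with made-up currents, not the paper's measurements.

```python
import numpy as np

# Hypothetical linear calibration of voltammetric peak current vs. paraben
# concentration. Standards and currents are illustrative numbers only.

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # umol/L standards
current = np.array([0.41, 0.79, 1.22, 1.60, 2.01])   # peak current, uA

slope, intercept = np.polyfit(conc, current, 1)       # least-squares line

def concentration(i_peak):
    """Invert the calibration: current -> concentration."""
    return (i_peak - intercept) / slope

unknown = concentration(1.40)
print(f"sensitivity = {slope:.4f} uA per umol/L, unknown = {unknown:.2f} umol/L")
```

Because MP and PP oxidize at the same potential, the concentration recovered this way is the cumulative paraben index the abstract describes, not an individual species.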

Keywords: carbon nanofiber-epoxy composite electrode, electroanalysis, methylparaben, propylparaben

Procedia PDF Downloads 212
1702 Numerical Investigation of Indoor Environmental Quality in a Room Heated with Impinging Jet Ventilation

Authors: Mathias Cehlin, Arman Ameen, Ulf Larsson, Taghi Karimipanah

Abstract:

Indoor environmental quality (IEQ) is increasingly recognized as a significant factor influencing the overall level of building occupants’ health, comfort, and productivity. An air-conditioning and ventilation system is normally used to create and maintain good thermal comfort and indoor air quality, and providing occupant thermal comfort and well-being with minimal use of energy is the main purpose of heating, ventilating, and air-conditioning systems. The most widely known and used ventilation systems are mixing ventilation (MV) and displacement ventilation (DV). Impinging jet ventilation (IJV) is a promising ventilation strategy developed in the early 2000s. IJV has the advantage of supplying air downwards close to the floor with high momentum, thereby delivering fresh air further out into the room compared to DV. Operating in cooling mode, IJV systems can have higher ventilation effectiveness and heat removal effectiveness than MV, and therefore higher energy efficiency. But how does IJV perform when operating in heating mode? This paper examines the function of IJV in a typical office room under winter conditions (heating mode). A validated CFD model using the v2-f turbulence model is employed for the prediction of air flow pattern, thermal comfort, and air change effectiveness. The office room under consideration has the dimensions 4.2×3.6×2.5 m and can be designed as a single-person or two-person office. A number of important factors influencing the room with IJV are studied; the considered parameters are heating demand, number of occupants, and supply air conditions. A total of 6 simulation cases are carried out to investigate the effects of these parameters. The heat load in the room is contributed by occupants, a computer, and lighting, and the model includes one external wall with a window.
The interaction of heat sources, supply air flow, and downdraught from the window results in a complex flow phenomenon. Preliminary results indicate that IJV can be used for heating a typical office room, and the IEQ appears suitable in the occupied region for the studied cases.
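Two of the performance indices named above can be computed directly from CFD monitor values; the definitions below follow common ventilation literature, and the numbers are illustrative, not results of this study.

```python
# Ventilation performance indices from illustrative (made-up) monitor values.

def heat_removal_effectiveness(t_exhaust, t_supply, t_occupied_mean):
    # epsilon_T = (Te - Ts) / (Tm - Ts): values above 1 indicate that heat is
    # removed more efficiently than by fully mixed air.
    return (t_exhaust - t_supply) / (t_occupied_mean - t_supply)

def air_change_effectiveness(nominal_time_constant, room_mean_age_of_air):
    # One common definition: ACE = tau_n / (2 * mean age of air); equals 1.0
    # for perfect mixing and exceeds 1 for displacement-like flow.
    return nominal_time_constant / (2.0 * room_mean_age_of_air)

print(heat_removal_effectiveness(26.0, 18.0, 24.0))  # cooling-mode example
print(air_change_effectiveness(600.0, 400.0))        # tau_n = 600 s, mean age 400 s
```

In heating mode the supply is warmer than the room, so the same formulas apply with the sign of the temperature differences reversed.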

Keywords: computational fluid dynamics, impinging jet ventilation, indoor environmental quality, ventilation strategy

Procedia PDF Downloads 165
1701 Comparison of the Postoperative Analgesic Effects of Morphine, Paracetamol, and Ketorolac in Patient-Controlled Analgesia in the Patients Undergoing Open Cholecystectomy

Authors: Siamak Yaghoubi, Vahideh Rashtchi, Marzieh Khezri, Hamid Kayalha, Monadi Hamidfar

Abstract:

Background and objectives: Effective postoperative pain management in abdominal surgeries, which are painful procedures, plays an important role in reducing postoperative complications and increasing patient satisfaction. There are many techniques for pain control, one of which is patient-controlled analgesia (PCA). The aim of this study was to compare the analgesic effects of morphine, paracetamol, and ketorolac in patients undergoing open cholecystectomy using the PCA method. Material and Methods: This randomized controlled trial was performed on 330 ASA (American Society of Anesthesiologists) I-II patients (three equal groups, n=110) who were scheduled for elective open cholecystectomy in Shahid Rajaee hospital of Qazvin, Iran, from August 2013 until September 2015. All patients were managed by general anesthesia with the TIVA (total intravenous anesthesia) technique. The control group received morphine with a maximum dose of 0.02 mg/kg/h, the paracetamol group received paracetamol with a maximum dose of 1 mg/kg/h, and the ketorolac group received ketorolac with a maximum daily dose of 60 mg, all using the IV-PCA method. Pain score, nausea, hemodynamic variables (BP and HR), pruritus, arterial oxygen desaturation, and patient satisfaction were measured every two hours for 8 hours following the operation in all groups. Results: There were no significant differences in demographic data between the three groups. There was a statistically significant difference in the mean pain score at all times between the morphine and paracetamol, morphine and ketorolac, and paracetamol and ketorolac groups (P<0.001). The results indicated a reduction with time in the mean level of postoperative pain in all three groups, and at all times the mean level of pain in the ketorolac group was less than that in the other two groups (p<0.001).
Conclusion: According to the results of this study, ketorolac is more effective than morphine and paracetamol for postoperative pain control in patients undergoing open cholecystectomy using the PCA method.

Keywords: analgesia, cholecystectomy, ketorolac, morphine, paracetamol

Procedia PDF Downloads 184
1700 Integrations of Students' Learning Achievements and Their Analytical Thinking Abilities with the Problem-Based Learning and the Concept Mapping Instructional Methods on Gene and Chromosome Issue at the 12th Grade Level

Authors: Waraporn Thaimit, Yuwadee Insamran, Natchanok Jansawang

Abstract:

Analytical thinking and learning achievement are critical components of instruction: analytical thinking gives students the ability to break complex problems into components and solve them quickly and effectively, while learning achievement reflects the changes within the individual that result from learning activity. The aim of this study was to compare students’ analytical thinking abilities and their learning achievements under two instructional methods. The sample consisted of 80 students in two 12th-grade classes at Chaturaphak Phiman Ratchadaphisek School: a 40-student experimental group taught with the Problem-Based Learning (PBL) method and a 40-student control group taught with the Concept Mapping Instructional (CMI) method. The research instruments consisted of 5 lesson plans for each instructional method, assessed with pretest and posttest techniques. Students’ analytical thinking abilities were assessed with Analytical Thinking Tests, and their learning achievements with Learning Achievement Tests. Statistically significant differences between the post- and pre-tests of the whole student sample in the two chemistry classes were found with the paired t-test and F-test (two-way MANCOVA). Associations between students’ learning outcomes in each instructional method, their analytical thinking abilities, and their learning achievements were also found (ρ < .05). The results for the two instructional methods reveal that students in the PBL group achieved higher learning achievement in the chemistry classes than those in the CMI group. This suggests that analytical thinking ability, which involves gathering relevant information and identifying the key issues related to it, supports learning achievement.

Keywords: comparisons, students learning achievements, analytical thinking abilities, the problem-based learning method, the concept mapping instructional method, gene and chromosome issue, chemistry classes

Procedia PDF Downloads 250
1699 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out due to its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints for efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In the second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In the third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern.
The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes in DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
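The Boltzmann inversion step mentioned above can be sketched in its simplest (direct) form: a CG pair potential is obtained from a bead-bead radial distribution function as U(r) = -kB T ln g(r). The Gaussian-shaped g(r) below is synthetic, standing in for one measured from the all-atom simulations.

```python
import numpy as np

# Direct Boltzmann inversion: derive a coarse-grained pair potential from a
# (synthetic) radial distribution function g(r). In practice this is only the
# initial guess, refined iteratively against the AA distributions.

kB, T = 0.008314, 300.0                     # kJ/(mol*K), K
r = np.linspace(0.3, 1.2, 10)               # bead-bead distance, nm
g = np.exp(-((r - 0.7) ** 2) / 0.02)        # synthetic RDF peaked at 0.7 nm
g = np.clip(g, 1e-12, None)                 # avoid log(0) where g vanishes

U = -kB * T * np.log(g)                     # kJ/mol, up to an additive constant
print(U.round(2))
```

The potential minimum falls at the RDF peak (here 0.7 nm), i.e., the most probable bead separation becomes the bonded/nonbonded equilibrium distance of the CG model.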

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 108
1698 Relocation of Livestocks in Rural of Canakkale Province Using Remote Sensing and GIS

Authors: Melis Inalpulat, Tugce Civelek, Unal Kizil, Levent Genc

Abstract:

Livestock production is one of the most important components of the rural economy. Due to urban expansion, rural areas close to expanding cities transform into urban districts over time. However, legislation places restrictions on livestock farming in such administrative units, since these operations tend to create environmental concerns such as odor problems resulting from excessive manure production; existing animal operations should therefore be moved away from settlement areas. This paper focuses on the determination of suitable lands for livestock production in the Canakkale province of Turkey using remote sensing (RS) data and GIS techniques. To achieve this goal, Formosat-2 and Landsat 8 imagery, the ASTER DEM, 1:25000-scale soil maps, village boundaries, and village livestock inventory records were used. The study was conducted using a suitability analysis that evaluates the land in terms of limitations and potentials, with the suitability range categorized as Suitable (S) and Non-Suitable (NS). Limitations included the distances from main roads and crossroads, water resources, and settlements, while potentials were appropriate values of slope, land use capability, and land use/land cover (LULC) status. Village-based distributions of S land were presented and compared with livestock inventories. The results showed that approximately 44230 ha is inappropriate because of the distance limitations (NS). Moreover, according to the LULC map, 71052 ha consists of forests, olive and other orchards, and thus may not be suitable for livestock structures (NS). In total, 1228 ha of S land was found within the study area. The village-based findings indicated that in some villages livestock production continues on NS areas.
Finally, it was suggested that organized livestock zones serving more than one village may be constructed after detailed analyses are completed, considering also political decisions, the opinions of local people, etc.
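The limitations/potentials overlay described above amounts to combining boolean exclusion masks with threshold criteria; the toy rasters, buffer distances, and slope cutoff below are illustrative assumptions, not the study's actual values.

```python
import numpy as np

# Toy Suitable/Non-Suitable (S/NS) classification: limitations act as
# exclusion buffers, potentials as threshold criteria. All numbers invented.

dist_road = np.array([[50.0, 400.0], [900.0, 1500.0]])   # m to nearest road
dist_water = np.array([[300.0, 80.0], [500.0, 700.0]])   # m to water resource
slope_pct = np.array([[4.0, 10.0], [25.0, 6.0]])         # slope, percent

limitations_ok = (dist_road > 100) & (dist_water > 100)  # outside buffers
potentials_ok = slope_pct < 15                           # buildable terrain
suitable = limitations_ok & potentials_ok                # S where both hold
print(suitable)
```

A cell is S only when it passes every limitation and every potential, which is why the study's S area (1228 ha) is so much smaller than the raw study area.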

Keywords: GIS, livestock, LULC, remote sensing, suitable lands

Procedia PDF Downloads 276
1697 Phylogenetic Analysis Based On the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution

Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan

Abstract:

The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, a molecular evolution analysis was performed on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified with the RDP4 program; four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS = ω) was used to identify codon positions involved in adaptive processes. Bayesian and likelihood techniques were applied to detect selective pressure at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME), and phylogenetic analysis by maximum likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all D. semiclausum tree branches were found to be under episodic diversifying selection (EDS) at p ≤ 0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of adaptive evolution to fitness in the DBM parasitoid D. semiclausum.
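The ω statistic at the heart of the analysis can be sketched as a simple counting estimate; the substitution counts and site totals below are invented, and real tools (FEL, MEME, PAML) additionally correct for multiple hits and codon frequencies.

```python
# Toy dN/dS (omega) estimate from raw substitution counts. Real analyses use
# maximum-likelihood corrections; this is only the underlying ratio.

def omega(n_nonsyn, nonsyn_sites, n_syn, syn_sites):
    dN = n_nonsyn / nonsyn_sites    # nonsynonymous substitutions per nonsyn site
    dS = n_syn / syn_sites          # synonymous substitutions per syn site
    return dN / dS

w = omega(n_nonsyn=18, nonsyn_sites=300.0, n_syn=4, syn_sites=100.0)
verdict = "positive selection" if w > 1 else "purifying/neutral"
print(f"omega = {w:.2f} -> {verdict}")
```

ω > 1 at a codon is the signal the five site-level methods look for; ω < 1 indicates purifying selection and ω ≈ 1 neutrality.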

Keywords: diadegma sp, DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection

Procedia PDF Downloads 406
1696 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
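The ensemble step of such a framework reduces to averaging the class probabilities of the top models; the stub arrays below stand in for trained logistic-regression, random-forest, and neural-network classifiers, and the numbers are invented.

```python
import numpy as np

# Sketch of probability-averaging across three models. In the real framework
# each array would come from a trained classifier's predict_proba output.

p_logreg = np.array([0.70, 0.20, 0.55])   # P(unhealthy air) per day, model 1
p_forest = np.array([0.60, 0.35, 0.55])   # model 2
p_nnet = np.array([0.80, 0.05, 0.40])     # model 3

p_ensemble = np.mean([p_logreg, p_forest, p_nnet], axis=0)
prediction = (p_ensemble >= 0.5).astype(int)   # 1 = forecast unhealthy air
print(p_ensemble.round(2), prediction)
```

Averaging damps any single model's idiosyncratic errors, which is why the combined model in the study outperformed each individual one.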

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 107
1695 Grassroots Innovation for Greening Bangladesh's Urban Slums: The Role of Local Agencies

Authors: Razia Sultana

Abstract:

The chapter investigates the roles of local non-governmental organisations (NGOs) and community-based organisations (CBOs) in climate change adaptation through grassroots innovation in urban slums in Dhaka, Bangladesh. It highlights green infrastructure as an innovative means of mitigating the challenges emanating from climate change at the bottom of the pyramid. The research draws on semi-structured in-depth interviews with 11 NGOs and 2 CBOs working in various slums in Dhaka, and explores the activities of local agencies relating to urban green infrastructure (UGI) and its possible mitigation of a range of climate change impacts: thermal discomfort, heat stress, flooding, and the urban heat island. The main argument of the chapter is that, unlike in the Global North, stakeholders’ activities relating to UGI in cities of the Global South have not been expanded on a large scale; moreover, UGI as a risk-management strategy is underutilised in developing countries. The study finds that, in the context of Bangladesh, climate change adaptation through green infrastructure in cities is still nascent for local NGOs and CBOs. Their activities are mostly limited to addressing the basic needs of slum communities, such as water and sanitation. Urban slum dwellers have thus been one of the most vulnerable groups, deprived of the city’s basic ecological services. NGOs are nonetheless utilizing UGI in innovative ways despite various problems in slums; for instance, land scarcity and land insecurity are two key areas where UGI faces resistance. There are limited instances of NGOs using local and indigenous techniques to encourage slum dwellers to adopt UGI for creating sustainable environments. It is in this context that the paper showcases some of the grassroots innovations that NGOs are currently adopting in slums, and discusses challenges and opportunities for UGI as a strategy for climate change adaptation in slums.

Keywords: climate change adaptation, green infrastructure, Dhaka, slums, NGOs

Procedia PDF Downloads 144
1694 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications

Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna

Abstract:

Spatial Augmented Reality is a variation of Augmented Reality in which a head-mounted display is not required. This variation is useful in cases where the need for a head-mounted display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality: the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements, here referred to as the projection system and the input system; the process of achieving this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual-world scale matches the real-world scale, meaning that a virtual object maintains its perceived dimensions when projected into the real world. The input system, in turn, is calibrated if the application knows the position of a point in the projection plane relative to the RGB-depth sensor origin. Any kind of projection technology can be used (light-based projectors, close-range projectors, and screens) as long as it complies with the defined constraints, and the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect v2 as the input sensor and several projection devices. To evaluate the method, the defined constraints were applied to a variety of physical configurations, and after executing the method, several variables were measured to assess its performance. It was demonstrated that the method can handle different arrangements, giving the user a wide range of setup possibilities.
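A simplified version of the projector-sensor calibration idea is to estimate the planar homography between projector-image coordinates and points observed on the projection surface, e.g., with the direct linear transform (DLT). The four correspondences below are synthetic; the paper's marker-free procedure recovers such correspondences automatically.

```python
import numpy as np

# DLT estimation of a 3x3 homography H mapping projector-plane points to
# sensor-pixel points. Correspondences here are made-up illustrative values.

def homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the DLT system
    return H / H[2, 2]                # normalize so H[2,2] = 1

src = [(0, 0), (1, 0), (1, 1), (0, 1)]              # projector plane (normalized)
dst = [(10, 20), (110, 25), (115, 125), (12, 118)]  # observed sensor pixels

H = homography(src, dst)
p = H @ np.array([0.0, 0.0, 1.0])                   # map a projector point
print((p[:2] / p[2]).round(1))                      # maps back to near (10, 20)
```

With the homography (plus the depth channel for out-of-plane geometry), any projector pixel can be related to a 3D point seen by the RGB-D sensor, which is the essence of the calibrated state described above.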

Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality

Procedia PDF Downloads 113
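The calibration described above amounts to estimating a mapping between points seen by the input sensor and points on the projection plane; when the projection surface is planar, that mapping is a 2D homography. The abstract does not publish the authors' algorithm, so the following is only a minimal sketch of how such a mapping could be recovered from point correspondences using the direct linear transform (DLT); all names are illustrative.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # The homography is the null-space vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so that H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H, dividing out the projective scale."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In a real setup, `src` would be pattern points detected in the sensor's coordinate frame and `dst` the corresponding projector pixels; the recovered `H` then lets the application place virtual content at the correct real-world scale.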
1693 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections

Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang

Abstract:

Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as economic and architectural advantages. However, the use of this configuration is limited by the difficulty of connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcoming this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach in which machine learning algorithms generate metamodels. This method reduces the need to develop complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data-fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection would transform high-rise and multistorey construction through the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness from the design parameters when using Extended Hollo-Bolt blind bolts. It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving blind-bolted rigid connections when using the blind fastener.

Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling

Procedia PDF Downloads 146
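The metamodel workflow above replaces an expensive finite element simulation with a cheap learned mapping from design parameters to connection response. The authors use artificial neural networks; purely as a self-contained illustration of the same surrogate idea, the sketch below fits a quadratic response surface by least squares to a handful of hypothetical simulation samples. The parameter names, ranges, and response function are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive simulation": connection stiffness as a function
# of two design parameters, e.g. bolt diameter d and tube wall thickness t.
def simulate(d, t):
    return 50.0 * d + 200.0 * t + 8.0 * d * t

# Design of experiments: sample the parameter space.
d = rng.uniform(16, 24, size=40)       # bolt diameter, mm
t = rng.uniform(5, 12, size=40)        # wall thickness, mm
k = simulate(d, t)                     # "observed" responses

# Quadratic response-surface metamodel:
# k ~ c0 + c1*d + c2*t + c3*d*t + c4*d^2 + c5*t^2
X = np.column_stack([np.ones_like(d), d, t, d * t, d**2, t**2])
coef, *_ = np.linalg.lstsq(X, k, rcond=None)

def metamodel(d, t):
    """Cheap surrogate prediction of the simulated response."""
    return np.array([1.0, d, t, d * t, d**2, t**2]) @ coef
```

Once fitted, the surrogate can be evaluated thousands of times during design optimization at negligible cost, which is the point of the metamodeling approach; an ANN plays the same role for more strongly nonlinear responses.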
1692 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of the simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models by verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of verification and validation helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirement, to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers and experts from the modeling, process design, and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 244
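Validation testing as described above comes down to a quantitative comparison of simulated process parameters against plant reference data. The sketch below shows one way such a check could be scripted: compute error metrics over a transient and flag the parameter as passing when every sample stays within a tolerance band. The signals, tolerance value, and acceptance criterion are illustrative only, not those used for KALBR-SIM.

```python
import math

def validate_parameter(simulated, reference, rel_tol=0.02):
    """Compare a simulated trajectory against plant reference data.

    Returns (rmse, max_rel_error, passed), where passed is True when
    every sample lies within rel_tol of the reference value.
    """
    assert len(simulated) == len(reference)
    sq_err = 0.0
    max_rel = 0.0
    for s, r in zip(simulated, reference):
        sq_err += (s - r) ** 2
        max_rel = max(max_rel, abs(s - r) / abs(r))
    rmse = math.sqrt(sq_err / len(simulated))
    return rmse, max_rel, max_rel <= rel_tol
```

For example, with a hypothetical coolant outlet temperature transient, `validate_parameter([501.0, 504.0, 513.5, 519.0], [500.0, 505.0, 512.0, 520.0])` reports the RMSE and worst-case relative deviation and accepts the model, since all samples are within 2% of the plant data.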
1691 Rapid and Cheap Test for Detection of Streptococcus pyogenes and Streptococcus pneumoniae with Antibiotic Resistance Identification

Authors: Marta Skwarecka, Patrycja Bloch, Rafal Walkusz, Oliwia Urbanowicz, Grzegorz Zielinski, Sabina Zoledowska, Dawid Nidzworski

Abstract:

Upper respiratory tract infections are one of the most common reasons for visiting a general practitioner. Streptococci are the most common bacterial etiological factors in these infections. There are many different types of Streptococci, and infections vary in severity from mild throat infections to pneumonia. For example, S. pyogenes mainly contributes to acute pharyngitis, tonsillitis, and scarlet fever, whereas S. pneumoniae is responsible for several invasive diseases such as sepsis, meningitis, or pneumonia, with high mortality and dangerous complications. Only a few diagnostic tests are designed for the detection of Streptococci from patients' infected throats. However, they are mostly based on lateral-flow techniques and are not used as a standard due to their low sensitivity. The diagnostic standard is to culture the patient's throat swab on semi-selective media in order to multiply the pure etiological agent of infection and subsequently perform an antibiogram, which takes several days from the patient's visit to the clinic. Therefore, the aim of our studies is to develop and bring to market a point-of-care device for the rapid identification of Streptococcus pyogenes and Streptococcus pneumoniae with simultaneous identification of antibiotic resistance genes. In the course of our research, we successfully selected genes for species-level identification of Streptococci and genes encoding antibiotic resistance proteins. We have developed an amplification reaction for these genes, which allows detection of the presence of S. pyogenes or S. pneumoniae, followed by testing of their resistance to erythromycin, chloramphenicol, and tetracycline. Furthermore, detection of β-lactamase-encoding genes, which could protect Streptococci against antibiotics of the ampicillin group that are widely used in the treatment of this type of infection, is also under development. The test is carried out directly from the patient's swab, and the results are available within 20 to 30 minutes of sample submission, so they could be obtained during the medical visit.

Keywords: antibiotic resistance, Streptococci, respiratory infections, diagnostic test

Procedia PDF Downloads 115
1690 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a considerable segment of the world's population lives in urban areas, and this proportion will increase vastly in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the same period. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the potential of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would greatly enhance planners' capability to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in more depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. This statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 119
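The composite index described above aggregates several per-zone indicators into a single score. The abstract does not give the aggregation formula, so the following is only a minimal sketch of one standard construction: z-score normalization of each indicator followed by an equally weighted mean. The indicator names and figures below are invented for illustration, not taken from the Rome study.

```python
import statistics

def zscores(values):
    """Standardize a list of values to zero mean and unit variance."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def composite_index(indicators):
    """indicators: dict mapping indicator name -> list of per-zone values,
    oriented so that higher means more discomfort. Returns the per-zone
    index as an equally weighted mean of z-scores."""
    cols = [zscores(vals) for vals in indicators.values()]
    n_zones = len(cols[0])
    return [sum(col[i] for col in cols) / len(cols) for i in range(n_zones)]

# Hypothetical indicators for four urban zones:
indicators = {
    "youth_unemployment_rate": [28.0, 35.0, 22.0, 40.0],
    "rent_to_income_ratio":    [0.45, 0.38, 0.30, 0.52],
    "neet_share":              [0.18, 0.25, 0.12, 0.30],
}
index = composite_index(indicators)
```

Because each indicator is standardized before averaging, variables measured on very different scales (percentages, ratios) contribute comparably; differential weights or a principal-component aggregation could be substituted in the same structure.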