Search results for: cost evaluation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12030

900 Reactors with Effective Mixing as a Solution for Micro-Biogas Plants

Authors: M. Zielinski, M. Debowski, P. Rusanowska, A. Glowacka-Gil, M. Zielinska, A. Cydzik-Kwiatkowska, J. Kazimierowicz

Abstract:

Technologies for micro-biogas plants with heating and mixing systems are presented as part of the Research Coordination for a Low-Cost Biomethane Production at Small and Medium Scale Applications (Record Biomap) project. The main objective of the Record Biomap project is to build a network of operators and scientific institutions interested in cooperation and in the development of promising technologies in the sector of small and medium-sized biogas plants. The activities carried out in the project will bridge the gap between research and market and reduce the time needed to implement new, efficient technological and technical solutions. The first technology is a reactor with a combined mixing and heating system, a concrete tank with a rectangular cross-section. In this reactor, heating is integrated with the mixing of the substrate and anaerobic sludge. The design is dedicated to substrates with a high solids content, which cannot be fed into the reactor with pumps, even positive displacement pumps. Substrates are poured into the reactor and then mixed with the anaerobic sludge by a screw pump. The pumped sludge, flowing through the screw pump, is simultaneously heated by a heat exchanger. The level of the fermentation sludge inside the reactor chamber lies above the bottom edge of the cover. The cover of the reactor carries the screw pump drive; an electric motor installed inside the reactor drives the screw pump. The heated sludge circulates in the digester. The post-fermented sludge is collected using a drain well, whose inlet lies below the level of the sludge in the digester. Biogas is discharged from the reactor through a biogas intake valve located on the cover. The technology is very useful for the fermentation of lignocellulosic biomass and substrates with a high dry matter content (organic wastes). The second technology is a reactor for a micro-biogas plant with a pressure mixing system. The reactor takes the form of a plastic or concrete tank with a circular cross-section. Effective mixing of the sludge is ensured by the bottom of the tank, which is profiled at 90°. Substrates for fermentation are supplied through an inlet well equipped with a cover that eliminates odour release. The introduction of a new portion of substrates is preceded by pumping of digestate to the disposal well; optionally, the digestate can flow by gravity to a digestate storage tank. The biogas obtained is discharged into a separator, and a valve supplies it to a blower. The blower pressurizes the biogas from the fermentation chamber in such a way as to facilitate the introduction of a new portion of substrates. Biogas is discharged from the reactor through a valve that enables biogas removal but prevents suction from outside the reactor.

Keywords: biogas, digestion, heating system, mixing system

Procedia PDF Downloads 154
899 The Invisibility of Production: A Comparative Study of the Marker of Modern Urban-Centric Economic Development

Authors: Arpita Banerjee

Abstract:

We now live in a world where half of the human population are city dwellers. The migration of people from rural to urban areas is rising continuously, but the promise of greater wages and a better quality of life cannot keep up with the pace of migration. The rate of urbanization is much higher in developing countries. The UN predicts that 95 percent of this urban expansion will take place in the developing world in the next few decades. The population in the urban settlements of developing nations is soaring, and megacities like Mumbai, Dhaka, Jakarta, Karachi, Manila, Shanghai, Rio de Janeiro, Lima, and Kinshasa are crammed with people, a majority of whom are migrants. Rural-urban migration has taken a new shape with the rising number of smaller cities. Apart from the increase in non-agricultural economic activities, the growing demand for resources and energy, the increase in waste and pollution, and a greater ecological footprint, there is another significant characteristic of the current wave of urbanization. This paper analyses that important marker of urbanization: the invisibility of production sites. The growing urban space ensures that the producers, the production sites, and the production process stay beyond urban visibility. In cities and towns, living is mainly about earning money either in the informal service and small-scale manufacturing sectors (a major part of which is food preparation) or in the formal service sector. In both cases, commodity creation cannot be seen. The urban space happens to be the marketplace, where nature and its services, along with non-urban labour, cannot be seen unless they are sold in the market. Hence, consumers are increasingly becoming disengaged from producers. This paper compares the rate of increase in the size of, and employment in, the informal and/or formal sectors of some selected urban areas of India. A comparison of these characteristics over the years is also presented, in order to find out how the anonymity of producers to urban consumers has grown as urbanization has risen. This paper also analyses the change in the cost of transporting goods into the cities and towns of India and supports the claim made here that the invisibility of production is a crucial marker of modern-day urban-centric economic development. Such urbanization has an important ecological impact. The invisibility of the production site saves the urban consumer society from dealing with the ethical and ecological aspects of the production process. Once real-sector production is driven out of the cities and towns, the invisible ethical and ecological impacts of growing urban consumption free the consumers from associating themselves with any responsibility towards those impacts.

Keywords: ecological impact of urbanization, informal sector, invisibility of production, urbanization

Procedia PDF Downloads 134
898 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India

Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar

Abstract:

Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women, and 5.4% of infections are from mother-to-child transmission (MTCT); 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at risk of higher adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. The obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes and intrahepatic cholestasis of pregnancy. The neonatal outcomes analysed were birth weight, Apgar score, NICU admission and perinatal transmission. Out of 212 HIV-positive women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 women received duovir and 80 women received triple drug therapy, depending upon the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section and 18 (8.5%) delivered vaginally. 178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.60 ± 499 g) when compared to HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to have HIV-positive status. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.

Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth

Procedia PDF Downloads 261
897 Determination of Genetic Markers, Microsatellite Type, Linked to Milk Production Traits in Goats

Authors: Mohamed Fawzy Elzarei, Yousef Mohammed Al-Dakheel, Ali Mohamed Alseaf

Abstract:

Modern molecular techniques, such as single-marker analysis of traits linked to markers, can provide rapid and accurate genetic results. In the last two decades of the last century, the application of molecular techniques reached an advanced stage in cattle, sheep, and pigs. In goats, especially in our region, the application of molecular techniques still lags far behind other species. As reported by many researchers, the microsatellite marker is one of the markers most suitable for linkage studies. Single-marker analysis of traits of interest is a technique that allows early selection of animals without the need to map the entire genome. The simplicity, applicability, and low cost of this technique have given it a wide range of applications in many areas of genetics and molecular biology. The technique also provides a useful approach for evaluating genetic differentiation, particularly in populations that are poorly known genetically. The expected breeding value (EBV) and yield deviation (YD) are considered the parameters most used for studying the linkage between quantitative characteristics and molecular markers, since these values are raw data corrected for non-genetic factors. A total of 17 microsatellite markers (from chromosomes 6, 14, 18, 20 and 23) were used in this study to search for chromosomal regions that could be responsible for genetic variability in some milk traits and explain part of the phenotypic variance. Results of single-marker analyses were used to identify the linkage between microsatellite markers and variation in the EBVs of the following traits: milk yield, protein percentage, fat percentage, litter size and weight at birth, and litter size and weight at weaning. In the estimation of parameters from the forward and backward solutions of the stepwise regression procedure for the milk yield trait, only two markers, OARCP9 and AGLA29, showed a highly significant effect (p≤0.01) in both solutions. The forward solution for the different equations indicated that their R² depended mainly on the partial regression coefficients (βi) of these two markers. For the milk protein trait, four markers showed a significant effect: BMS2361 and CSSM66 (p≤0.01), and BMS2626 and OARCP9 (p≤0.05). Likewise, four markers (MCM147, BM1225, INRA006, and INRA133) showed a highly significant effect (p≤0.01) in both the backward and forward solutions in association with the milk fat trait. For litter size at birth and at weaning, only one marker each (BM143, p≤0.01, and RJH1, p≤0.05, respectively) showed a significant effect in the backward and forward solutions. For the litter weight at birth (LWB) trait, only one marker (MCM147) showed a highly significant effect (p≤0.01) and two markers (ILSTS011 and CSSM66) showed a significant effect (p≤0.05) in the backward and forward solutions.
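
A minimal Python sketch of the single-marker / forward stepwise regression described in this abstract, assuming a pandas DataFrame `data` with one numerically coded genotype column per microsatellite marker and an EBV column; the column names and the significance threshold are illustrative assumptions, not those of the study.

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(data: pd.DataFrame, response: str, alpha: float = 0.05):
    """Greedy forward selection: add the marker with the lowest p-value at each
    step until no remaining marker is significant at `alpha`."""
    candidates = [c for c in data.columns if c != response]
    selected = []
    while candidates:
        pvals = {}
        for marker in candidates:
            X = sm.add_constant(data[selected + [marker]])
            pvals[marker] = sm.OLS(data[response], X).fit().pvalues[marker]
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:
            break
        selected.append(best)
        candidates.remove(best)
    final = sm.OLS(data[response], sm.add_constant(data[selected])).fit()
    return selected, final

# Hypothetical usage:
# selected, fit = forward_stepwise(data, "ebv_milk_yield")
# print(selected, fit.rsquared, fit.pvalues)
```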

Keywords: microsatellites marker, estimated breeding value, stepwise regression, milk traits

Procedia PDF Downloads 93
896 Strategies for Public Space Utilization

Authors: Ben Levenger

Abstract:

Social life revolves around a central meeting place or gathering space. It is where the community integrates, earns social skills, and ultimately becomes part of the community. Following this premise, public spaces are one of the most important spaces that downtowns offer, providing locations for people to be witnessed, heard, and most importantly, seamlessly integrate into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper will dive deep into analyzing, designing, and implementing public space design for small plazas or gathering spaces. These spaces often require a detailed level of study, followed by a broad stroke of design implementation, allowing for adaptability. This paper will highlight how to assess needs, define needed types of spaces, outline a program for spaces, detail elements of design to meet the needs, assess your new space, and plan for change. This study will provide participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism, for the sake of urbanism, is an exercise in aesthetic beauty. An economic improvement or benefit must be attained to solidify these efforts' purpose further and justify the infrastructure or construction costs. We will deep dive into case studies highlighting economic impacts to ground this work in quantitative impacts. These case studies will highlight the financial impact on an area, measuring the following metrics: rental rates (per sq meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, cost/valuation benefits of increased density in housing. The economic impact results will be targeted by community size, measuring in three tiers: Sub 10,000 in population, 10,001 to 75,000 in population, and 75,000+ in population. Through this classification breakdown, the participants can gauge the impact in communities similar to their work or for which they are responsible. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, pedestrian malls, etc., will be discussed. Metrics that document the economic impact of each enhancement will be presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive. They are welcome to download the data and use it for their purposes.

Keywords: downtown, economic development, planning, strategic

Procedia PDF Downloads 85
895 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites

Authors: B. Yaman, G. Acikbas, N. Calis Acikbas

Abstract:

Epoxy-based materials have advantages in tribological applications due to their unique properties, such as light weight, self-lubrication capacity and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also the mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reduced production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with different particle sizes (below 90 µm and 150-250 µm) were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. Microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was performed by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. Hardness was measured using the Shore-D scale, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed and no agglomerations were encountered in the epoxy resin. Mechanical test results showed that the hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates and wear mechanisms by ball-on-disk contact under dry, rotational sliding at room temperature against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23-25°C) with a humidity of 40 ± 5% under dry-sliding conditions. The contact radius was set to 5 mm at a linear speed of 30 cm/s for the geometry used in this study. In all the experiments, a constant test load of 3 N was applied at a frequency of 8 Hz, and the tests were continued to a wear distance of 400 m. The friction coefficient of the samples was recorded online from the variation in the tangential force. The steady-state coefficients of friction ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles by a stylus profilometer. The wear volumes were calculated by integrating these 2D cross-sectional areas along the circular wear track, and specific wear rates were computed by dividing the wear volume by the applied load and sliding distance. According to the experimental results, the use of porcelain waste in the fabrication of epoxy resin composites can be suggested as a potential material route, as it allows improved mechanical and tribological properties while also reducing production cost.
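
A short arithmetic sketch of the wear calculation quoted above (wear-track cross-sectional area integrated around the circular track, then normalized by load and sliding distance); the 3 N load, 5 mm track radius and 400 m distance follow the stated test conditions, while the measured track area is a hypothetical placeholder.

```python
import math

def specific_wear_rate(track_area_mm2: float, track_radius_mm: float,
                       load_N: float, sliding_distance_m: float) -> float:
    """Return the specific wear rate in mm^3/(N*m)."""
    wear_volume_mm3 = track_area_mm2 * 2 * math.pi * track_radius_mm  # area x track circumference
    return wear_volume_mm3 / (load_N * sliding_distance_m)

# Hypothetical measured cross-section of 0.002 mm^2 under the stated conditions:
k = specific_wear_rate(0.002, 5.0, 3.0, 400.0)
print(f"specific wear rate ~ {k:.2e} mm^3/(N*m)")
```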

Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties

Procedia PDF Downloads 196
894 Synergy Surface Modification for High Performance Li-Rich Cathode

Authors: Aipeng Zhu, Yun Zhang

Abstract:

The growing and grievous environmental problems, together with the exhaustion of energy resources, place urgent demands on the development of high energy density batteries. Considering factors including capacity, resources and the environment, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be one of the most promising cathode materials for next-generation Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial applications are hindered by crucial drawbacks such as poor rate performance, limited cycle life and a continuous fall of the discharge potential. With decades of extensive studies, significant achievements have been made in improving their cyclability and rate performance, but they still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation. In this process, metal ions can dissolve in the electrolyte, and the surface phase change can hinder the intercalation/deintercalation of Li ions, resulting in low capacity retention and low working voltage. Surface coating is an efficient method to optimize LLO cathode materials. Considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, because of the low initial Coulombic efficiency (ICE), the pristine LLO was pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method. The as-prepared precursor was then thoroughly mixed with Li₂CO₃ and calcined in air at 500°C for 5 h and 900°C for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then stirred in 0.1 ml/g KMnO₄ solution for 3 h. The resultant was filtered, washed with water, and dried in an oven. The LLO obtained was dispersed in Al(NO₃)₃ solution. The mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLO. After lyophilization, the LLO was calcined at 500°C for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting a mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone with a mass ratio of 80:15:5 onto aluminum foil. Electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) than pristine LNMO (76% and 80%, respectively). These results suggest that the KMnO₄ pretreatment and Al₂O₃ coating can increase the ICE and cycling stability.

Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃

Procedia PDF Downloads 133
893 Development of Bilayer Coating System for Mitigating Corrosion of Offshore Wind Turbines

Authors: Adamantini Loukodimou, David Weston, Shiladitya Paul

Abstract:

Offshore structures are subjected to harsh environments. It is well documented that carbon steel needs protection from corrosion. The combined effect of UV radiation, seawater splash, and fluctuating temperatures diminishes the integrity of these structures. In addition, the possibility of damage caused by floating ice, seaborne debris, and maintenance boats makes them even more vulnerable. Their inspection and maintenance far out at sea are difficult, risky, and expensive. The best-known method of mitigating corrosion of offshore structures is the use of cathodic protection. There are several zones in an offshore wind turbine. In the atmospheric zone, due to the lack of a continuous electrolyte (seawater) layer between the structure and the anode at all times, this method proves inefficient. Thus, the use of protective coatings becomes indispensable. This research focuses on the atmospheric zone. The conversion of a commercially available, conventional epoxy paint system to an autonomous self-healing paint system via the addition of suitable encapsulated healing agents and a catalyst is investigated in this work. These coating systems, which can self-heal when damaged, can provide a cost-effective engineering solution to corrosion and related problems. When damage to the paint coating occurs, the microcapsules are designed to rupture and release the self-healing liquid (monomer), which then reacts in the presence of the catalyst and solidifies (polymerization), resulting in healing. The catalyst should be compatible with the system because otherwise the self-healing process will not occur. The carbon steel substrate will be exposed to a corrosive environment, so the use of a sacrificial layer of Zn is also investigated. More specifically, the first layer of this new coating system will be TSZA (thermally sprayed Zn85/Al15) and will be applied to carbon steel samples with dimensions of 100 x 150 mm after blasting with alumina (size F24) as part of the surface preparation. Based on the literature, this layer corrodes readily, so an additional paint layer enriched with microcapsules will be added. The reaction and curing times are also of high importance for this bilayer coating system to work successfully. For the first experiments, polystyrene microcapsules loaded with 3-octanoylthio-1-propyltriethoxysilane were prepared. Electrochemical experiments such as electrochemical impedance spectroscopy (EIS) confirmed the corrosion-inhibiting properties of the silane. The diameter of the microcapsules was about 150-200 microns. Further experiments were conducted with different reagents and methods in order to obtain diameters of about 50 microns, and their self-healing properties were tested in synthetic seawater using electrochemical techniques. The use of combined paint/electrodeposited coatings allows for the further development of novel composite coating systems. The potential for the application of these coatings in offshore structures will be discussed.
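
For context only: EIS spectra of coated steel are often interpreted with a simple Randles equivalent circuit (solution resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance). The abstract does not state which circuit model was fitted, so the sketch below, with placeholder parameter values, merely illustrates how such spectra can be simulated.

```python
import numpy as np

def randles_impedance(freq_hz, Rs=20.0, Rct=5e4, Cdl=1e-6):
    """Complex impedance of Rs + (Rct || Cdl) over an array of frequencies."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    z_cdl = 1.0 / (1j * omega * Cdl)
    return Rs + (Rct * z_cdl) / (Rct + z_cdl)

freqs = np.logspace(-2, 5, 60)   # 10 mHz to 100 kHz
Z = randles_impedance(freqs)
print(abs(Z[0]))                 # low-frequency |Z| approaches Rs + Rct
```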

Keywords: corrosion mitigation, microcapsules, offshore wind turbines, self-healing

Procedia PDF Downloads 115
892 Mechanical Properties and Antibiotic Release Characteristics of Poly(methyl methacrylate)-based Bone Cement Formulated with Mesoporous Silica Nanoparticles

Authors: Kumaran Letchmanan, Shou-Cang Shen, Wai Kiong Ng

Abstract:

Postoperative implant-associated infections in soft tissues and bones remain a serious complication in orthopaedic surgery, leading to impaired healing, re-implantation, prolonged hospital stays and increased cost. Drug-loaded implants with sustained release of antibiotics at the local site are of current research interest to reduce the risk of post-operative infections and osteomyelitis and thus minimize the need for follow-up care and increase patient comfort. However, the improved drug release of drug-loaded bone cements is usually accompanied by a loss in mechanical strength, which is critical for weight-bearing bone cement. Recently, more attempts have been made to develop techniques that enhance antibiotic elution while preserving the mechanical properties of the bone cement. The present study investigates the potential influence of the addition of mesoporous silica nanoparticles (MSN) on the in vitro drug release kinetics of gentamicin (GTMC), along with the mechanical properties of the bone cements. Simplex P was formulated with MSN and loaded with GTMC by direct impregnation. Simplex P with a water-soluble porogen (xylitol) and with a high loading of GTMC, as well as the commercial bone cement CMW Smartset GHV, were used as controls. MSN-formulated bone cements increased the release of GTMC by 3-fold, with a cumulative release of more than 46% compared with the other control groups. Furthermore, a sustained release could be achieved for two months. The loaded nano-sized MSN with uniform pore channels build up an effective nano-network in the bone cement that facilitates the diffusion and extended release of GTMC. Compared with the formulations using xylitol and high GTMC loading, the incorporation of MSN shows no detrimental effect on the biomechanical properties of the bone cements, with no significant changes in mechanical properties compared with the original bone cement. After drug release for two months, the bending modulus of MSN-formulated bone cement is 4.49 ± 0.75 GPa and the compression strength is 92.7 ± 2.1 MPa (similar to the compression strength of Simplex P: 93.0 ± 1.2 MPa). The unaffected mechanical properties of the MSN-formulated bone cements are due to the unchanged microstructure of the bone cement, in which more than 98% of the MSN remains in the matrix and supports the bone cement structure. In contrast, large additional voids can be observed for the formulations using xylitol and high drug loading after the drug release study, which caused the compressive strength to fall below the ASTM F541 and ISO 5833 minimum of 70 MPa. These results demonstrate the potential applicability of MSN-functionalized poly(methyl methacrylate)-based bone cement as a highly efficient, sustained and local drug delivery system with good mechanical properties.
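
As an aside, cumulative-release profiles such as the one reported here are often summarised by fitting a simple power-law (Korsmeyer-Peppas) model; the abstract does not state which release model, if any, was fitted, so the time points and release fractions below are hypothetical placeholders used purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Fraction released M_t/M_inf = k * t^n (valid for the early part of release)."""
    return k * np.power(t, n)

t_days = np.array([1, 3, 7, 14, 28, 42, 60], dtype=float)        # hypothetical
released = np.array([0.12, 0.20, 0.28, 0.35, 0.41, 0.44, 0.46])  # hypothetical fractions

(k, n), _ = curve_fit(korsmeyer_peppas, t_days, released, p0=(0.1, 0.5))
print(f"k = {k:.3f}, release exponent n = {n:.3f}")  # n characterizes the release mechanism
```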

Keywords: antibiotics, biomechanical properties, bone cement, sustained release

Procedia PDF Downloads 257
891 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows an S-shaped curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe; there, it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from its critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas are different once they are stored in the occipital lobe. The human brain is incapable of evaluating further once it accepts ideas as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers and the media.

Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam

Procedia PDF Downloads 48
890 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city; this is attributed to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging techniques, in order to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, and to develop risk maps, using data obtained from the chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Two-dimensional spatial Geographic Information System (GIS) chemical maps of the groundwater provided a powerful tool for detecting potential sites threatened by contamination. The GIS-based maps showed that the highest rates of contamination were observed in the central and southern areas, with relatively lower levels in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations over 85 m, the scatter diagram showed that the groundwater of the study area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to dissolved atmospheric CO2 in the soil, while saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical models provided more accurate results for identifying the sources of contamination and evaluating groundwater quality. GIS was also a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
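
A compact sketch of this kind of geostatistical workflow (standardized hydrochemical variables, factor analysis, clustering into water types, and ordinary kriging of one indicator onto a grid), assuming a pandas DataFrame `df` with coordinate columns and the measured variables; the column names, the number of factors and clusters, and the variogram model are assumptions, as the abstract does not specify them.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from pykrige.ok import OrdinaryKriging

chem_cols = ["Na", "Ca", "Cl", "HCO3", "NO3", "TDS", "EC"]   # hypothetical column names

def analyse(df: pd.DataFrame):
    X = StandardScaler().fit_transform(df[chem_cols])
    loadings = FactorAnalysis(n_components=2, random_state=0).fit(X).components_
    water_types = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(X)

    # Ordinary kriging of one indicator (NO3 here) onto a regular grid
    ok = OrdinaryKriging(df["x"].values, df["y"].values, df["NO3"].values,
                         variogram_model="spherical")
    gx = np.linspace(df["x"].min(), df["x"].max(), 100)
    gy = np.linspace(df["y"].min(), df["y"].max(), 100)
    grid, variance = ok.execute("grid", gx, gy)
    return loadings, water_types, grid

# loadings, water_types, no3_map = analyse(df)
```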

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 169
889 Disaster Capitalism, Charter Schools, and the Reproduction of Inequality in Poor, Disabled Students: An Ethnographic Case Study

Authors: Sylvia Mac

Abstract:

This ethnographic case study examines disaster capitalism, neoliberal market-based school reforms, and disability through the lens of Disability Studies in Education. More specifically, it explores neoliberalism and special education at a small, urban charter school in a large city in California and the (re)production of social inequality. The study uses the Sociology of Special Education to examine the ways in which special education is used to sort and stratify disabled students. At a time when rhetoric surrounding public schools is framed in catastrophic and dismal language in order to justify the privatization of public education, small urban charter schools must be examined to learn whether they are living up to their promise or acting as another way to maintain economic and racial segregation. The study concludes that neoliberal contexts threaten successful inclusive education and normalize poor, disabled students' continued low achievement and poor post-secondary outcomes. This ethnographic case study took place at a small urban charter school in a large city in California. Participants included three special education students, the special education teacher, the special education assistant, a regular education teacher, and the two founders and charter writers. The school claimed to have a push-in model of special education in which all special education students were fully included in the general education classroom. Although the school was presented as fully inclusive, some special education students also attended a pull-out class called Study Skills. The study found that inclusion and neoliberalism are differing ideologies that cannot co-exist. Successful inclusive environments cannot thrive under the influence of neoliberal education policies such as efficiency and cost-cutting. Additionally, the push for students to join the global knowledge economy means that more and more low attainers are further marginalized and kept in poverty. At this school, neoliberal ideology eclipsed the promise of inclusive education for special education students. This case study has shown the need for inclusive education to be interrogated through lenses that consider macro factors, such as neoliberal ideology in public education, as well as the emerging global knowledge economy and increasing income inequality. Barriers to inclusion inside the school, such as teachers' attitudes, teacher preparedness, and school infrastructure, paint only part of the picture. Inclusive education is also threatened by neoliberal ideology that shifts responsibility from the state to the individual. This ideology is dangerous because it reifies stereotypes of disabled students as lazy, needy drains on already dwindling budgets. If these stereotypes persist, inclusive education will have a difficult time succeeding. In order to more fully examine the ways in which inclusive education can become truly emancipatory, we need more analysis of the relationship between neoliberalism, disability, and special education.

Keywords: case study, disaster capitalism, inclusive education, neoliberalism

Procedia PDF Downloads 223
888 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal

Authors: C. M. Sapkota, B. P. Sapkota

Abstract:

Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation and training in their use, the biomedical equipment technician (BMET) is crucial. As a pivotal member of the health workforce, biomedical technicians are an essential component of the quality health service delivery mechanism supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated from 2018 hospital utilization and performance data available in the medical record sections of both hospitals. MZH employed a BMET during 2018, whereas SZH had no BMET in 2018. Focus group discussions with health workers in both hospitals were conducted to validate the hospital records. Client exit interviews were conducted to assess the level of client satisfaction in both hospitals. Results: In MZH, radiodiagnostic and laboratory equipment were available and utilized round the clock, and the operation theatre (OT) was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the OT was functional on only 54% of the days in 2018. A CT scan machine had just been installed but was not functional. The computerized X-ray unit in SZH was functional on only 72% of the days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 Caesarean sections, whereas SZH performed only 36% of 210 Caesarean sections in 2018. In the annual performance ranking of government hospitals, MZH was ranked 1st while SZH was ranked 19th out of 32 referral hospitals nationwide in 2018. Conclusion: Biomedical technicians are a crucial part of the human resources for health team, with a pivotal role. Trained and qualified BMET professionals are required within health-care systems in order to design, evaluate, regulate, acquire, maintain, manage and train on safe medical technologies. They apply knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability and utilization of safe, high-quality, effective, appropriate and socially acceptable biomedical technology for preventive, promotive, curative, rehabilitative and palliative care across all levels of health service delivery.

Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals

Procedia PDF Downloads 127
887 Recirculation Type Photocatalytic Reactor for Degradation of Monocrotophos Using TiO₂ and W-TiO₂ Coated Immobilized Clay Beads

Authors: Abhishek Sraw, Amit Sobti, Yamini Pandey, R. K. Wanchoo, Amrit Pal Toor

Abstract:

Monocrotophos (MCP) is a widely used pesticide in India, which belongs to the extremely toxic organophosphorus family, is persistent in nature, and its toxicity is widely reported in all environmental segments of the country. Advanced oxidation processes (AOPs) are a promising solution to the problem of water pollution. TiO₂ is widely used as a photocatalyst because of its many advantages, but it has a large band gap, so it is modified using metal and non-metal dopants to make it active under sunlight and visible light. The use of nanosized powdered catalysts makes the recovery process extremely complicated. Hence, the aim is to use a low-cost, easily available, eco-friendly clay material in the form of beads as a support for the immobilization of the catalyst, to solve the problem of post-separation of the suspended catalyst from treated water. A recirculation-type photocatalytic reactor (RTPR) using an ultraviolet light source (blue-black lamp) was designed, which works effectively for both suspended catalysts and catalyst-coated clay beads. The bare, TiO₂-coated and W-TiO₂-coated clay beads were characterized by scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and N₂ adsorption–desorption measurements (BET) for their structural, textural and electronic properties. The study involved variation of different parameters, such as light conditions, recirculation rate, light intensity and initial MCP concentration, under UV and sunlight for the degradation of MCP. The degradation and mineralization of the insecticide solution were monitored using a UV-Visible spectrophotometer, a COD vario-photometer and GC-MS analysis, respectively. The main focus of the work lies in checking the recyclability of the TiO₂ immobilized on clay beads in the developed RTPR for up to 30 continuous cycles without reactivation of the catalyst. The results demonstrated the economic feasibility of using the developed RTPR for the efficient purification of pesticide-polluted water. The prepared TiO₂ clay beads delivered 75.78% degradation of MCP under UV light with negligible catalyst loss. The RTPR filled with W-TiO₂-coated clay beads, however, showed 32% higher degradation of MCP under sunlight than the same system based on undoped TiO₂. The COD measurements showed that the TiO₂-coated beads led to a 73.75% COD reduction, while W-TiO₂ resulted in an 87.89% COD reduction. The GC-MS analysis confirms the efficient breakdown of complex MCP molecules into simpler hydrocarbons. This supports the promising application of clay beads as a support for the photocatalyst and proves their eco-friendly nature, excellent recyclability, catalyst-holding capacity, and economic viability.
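
Photocatalytic degradation data of this kind are commonly analysed with pseudo-first-order (Langmuir-Hinshelwood) kinetics, ln(C0/C) = k_app·t; the abstract reports percentage degradation rather than rate constants, so the concentration-time values below are hypothetical placeholders illustrating the fit.

```python
import numpy as np

t_min = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # hypothetical times, min
C = np.array([10.0, 8.1, 6.6, 5.4, 4.4, 3.6, 2.9])              # hypothetical MCP conc., mg/L

k_app = np.polyfit(t_min, np.log(C[0] / C), 1)[0]   # slope of ln(C0/C) versus t
degradation = 100 * (1 - C[-1] / C[0])
print(f"k_app ~ {k_app:.4f} 1/min, degradation ~ {degradation:.1f}%")
```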

Keywords: immobilized clay beads, monocrotophos, recirculation type photocatalytic reactor, TiO₂

Procedia PDF Downloads 182
886 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal EB doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to EB. In addition, alginate (ALG) is a widely used polymer in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%) and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs, owing to a 60% reduction in EB compared with the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD and ID showed values 9.5%, 84.8% and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
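
A minimal sketch of the two optimization ideas used here: a Derringer-type overall desirability that combines several responses, and Pareto filtering of candidate formulations for the multi-objective approach. The response values, bounds and weights below are hypothetical and do not reproduce the study's fitted response-surface models.

```python
import numpy as np

def desirability_max(y, low, high, weight=1.0):
    """Individual desirability in [0, 1] for a response to be maximized."""
    return np.clip((y - low) / (high - low), 0.0, 1.0) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

def pareto_front(points):
    """Indices of non-dominated points (all objectives to be maximized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidates scored on (intestinal release, negative gastric release):
candidates = [(0.62, -0.05), (0.55, -0.02), (0.70, -0.12), (0.66, -0.04)]
print(pareto_front(candidates))
print(overall_desirability([desirability_max(0.66, 0.4, 0.8), desirability_max(0.96, 0.9, 1.0)]))
```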

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 131
885 Large-Scale Production of High-Performance Fiber-Metal-Laminates by Prepreg-Press-Technology

Authors: Christian Lauter, Corin Reuter, Shuang Wu, Thomas Troester

Abstract:

Lightweight construction has become more and more important over the last decades in several applications, e.g. in the automotive or aircraft sector. This is the result of economic and ecological constraints on the one hand and increasing safety and comfort requirements on the other. In the field of lightweight design, different approaches are used due to the specific requirements of the technical systems. The use of continuous carbon fiber reinforced plastics (CFRP) offers the largest weight-saving potential, sometimes more than 50% compared to conventional metal constructions. However, industrial applications are still very limited because of the cost-intensive manufacturing of the fibers and the production technologies. Other disadvantages of pure CFRP structures concern quality control and damage resistance. One approach to meet these challenges is hybrid materials, in which CFRP and sheet metal are combined at the material level. Thereby, new opportunities for innovative process routes become realizable. Hybrid lightweight design results in lower costs due to optimized material utilization and the possibility of integrating the structures into the already existing production processes of automobile manufacturers. Recent and current research has pointed out the advantages of two-layered hybrid materials, i.e. the possibility of realizing structures with tailored mechanical properties or of dividing the curing cycle of the epoxy resin into two steps. Current research work at the Chair for Automotive Lightweight Design (LiA) at Paderborn University focuses on production processes for fiber-metal-laminates. The aim of this work is the development and qualification of a large-scale production process for high-performance fiber-metal-laminates (FML) for industrial applications in the automotive or aircraft sector. For this purpose, the prepreg-press technology is used, in which pre-impregnated carbon fibers and sheet metals are formed and cured in a closed, heated mold. The investigations focus, e.g., on the realization of short process chains and cycle times, the reduction of time-consuming manual process steps, and the reduction of material costs. This paper first gives an overview of the main steps of the production process. Afterwards, experimental results are discussed, concentrating on the influence of different process parameters on the mechanical properties, the laminate quality and the identification of process limits. Finally, the advantages of this technology compared to conventional FML production processes and other lightweight design approaches are presented.

Keywords: composite material, fiber-metal-laminate, lightweight construction, prepreg-press-technology, large-series production

Procedia PDF Downloads 240
884 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches

Authors: Erin Lawlor

Abstract:

In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year. The term deradicalisation has become synonymous with rehabilitation within security discourse. The allure of a “quick fix” when managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of an evidence base for deradicalisation being a genuine process/phenomenon, leading academics to retrospectively attempt to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both in the direct form of being a victim of an attack and in the indirect context of witnesses, children and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of what deradicalisation and public health issues are, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields? From this base, current deradicalisation practices are then explored through examples of work already being carried out. The critiques can be broken into discussion points of language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, identify what could be modified through intervention, and offer insights into the evaluation of interventions. As opposed to simply focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, due to the joint nature of this research between Sheffield Hallam University and La Trobe, Melbourne.

Keywords: radicalisation, deradicalisation, violent extremism, public health

Procedia PDF Downloads 67
883 The Gaps of Environmental Criminal Liability in Armed Conflicts and Its Consequences: An Analysis under Stockholm, Geneva and Rome

Authors: Vivian Caroline Koerbel Dombrowski

Abstract:

Armed conflicts have always represented the ultimate expression of power and, at the same time, of a lack of understanding among nations. Cities were destroyed, people were killed, assets were devastated. But these are not the only losses of a war: environmental damage comes to represent immeasurable losses in the short, medium and long term. And this is because no nation wants to bear that cost. They invest in military equipment, training and technical equipment, but the environmental account still finds gaps in international law. Considering such a generalization in rights protection, many nations are in imminent danger in a conflict if water is used as a mass weapon, especially if we consider important rivers such as the Jordan, the Euphrates and the Nile. The three main international documents on the subject were analyzed: the Stockholm Convention (1972), Additional Protocol I to the Geneva Convention (1977) and the Rome Statute (1998). In addition, references from legal doctrine, especially scientific articles, were researched to substantiate with consistent data the extent of the damage, historical factors and decisions that have been successful. However, due to the lack of literature on this subject, the research tends to be exhaustive. From the study of the indicated material, it was noted that international law - humanitarian and environmental - calls for environmental protection in armed conflicts in some of its instruments, but these are generic and vague rules that do not define exactly what environmental damage is, nor set standards for measuring it. Taking into account the main conflicts of the twentieth century - World War II, the Vietnam War and the Gulf War - one must realize that the environmental consequences were far-reaching: landmines that were never deactivated, buried nuclear weapons, armaments and munitions destroyed in the soil, chemical weapons, not to mention the effects of some weapons when used (uranium, Agent Orange, etc.). Extending the search to more recent conflicts such as Afghanistan, it is shown that the effects on the health of the civilian population were catastrophic: cancer, birth defects, and deformities in newborns. There are few reports of nations that, somehow, repaired the damage caused to the environment as a result of a conflict. In the heat of contemporary conflicts, many nations fear that water resources will be used as weapons of mass destruction, because once contaminated - directly or indirectly - they can become a means of disguised genocide as a side effect of a military objective. In conclusion, it appears that the main international treaties governing the subject mention the concern for environmental protection but leave gaps in the normative specifications needed for the effective prevention of environmental damage in armed conflict and, should it occur, for its repair. Moreover, there is no protection mechanism to safeguard natural resources and prevent them from becoming a weapon of mass destruction.

Keywords: armed conflicts, criminal liability, environmental damages, humanitarian law, mass weapon

Procedia PDF Downloads 420
882 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha-1, T2 = 25 kg N ha-1, T3 = 75 kg N ha-1, and T4 = 100 kg N ha-1) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were irrigated with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. Matlab© R2022b was also used for the implementation and performance analysis of the model. Efficiency was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
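
The classification workflow was implemented in Matlab; purely as an illustration of the transfer-learning step described above, a minimal sketch in Python/PyTorch is shown below. The folder layout, hyperparameters and number of epochs are assumptions for the example, not the authors' settings.

```python
# Minimal transfer-learning sketch for the four nitrogen-dose classes (T1..T4).
# Illustration only; the paper's implementation is in Matlab, and the paths and
# training settings here are assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # blocks are already 224x224 in the paper
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("leaf_blocks/train", transform=transform)  # assumed path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)   # four classes: T1, T2, T3, T4

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):                          # assumed number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```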

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 20
881 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf

Authors: Abderrazak Bannari, Ghadeer Kadhem

Abstract:

The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve these objectives, bathymetric attributed grid files (X, Y, and depth), generated from the coverage of ship-track MBES data with 300 x 300 m cells and processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). These data were then brought into ArcGIS and converted into a raster format in five steps: (1) export of the GEBCO BASE surface data to an ASCII file; (2) conversion of the ASCII file to a point shapefile; (3) extraction of the points covering the water boundary of the Kingdom of Bahrain; (4) multiplication of the depth values by -1 to obtain negative values; and (5) interpolation with the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30 x 30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2,200 bathymetric points were extracted from a medium-scale nautical map (1:100,000) covering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the MBES-CARIS raster bathymetric grid surface generated in step 5, and homologous depth points were selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths when only the shallow areas with depths of less than 10 m are considered (about 800 validation points). When only deeper areas (> 10 m) are considered, the correlation coefficient is 0.73 and the RMSE is ± 2.43 m, while for the totality of the 2,200 validation points, including all depths, the correlation remains significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation is most likely caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data, and the interpolation of missing values between MBES acquisition swath lines (ship-track sounding data) may not reflect the true depths of these areas. Overall, however, the MBES-CARIS data are very appropriate for bathymetric mapping of shallow water areas.
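
The abstract reports the validation statistics per depth class; the short sketch below illustrates, under assumed file names and column layout, how homologous chart and raster depths can be compared to obtain R² and RMSE by class (shallow, deep, all depths).

```python
# Sketch of the depth validation step: compare chart depths with raster-derived
# depths and report R² and RMSE per depth class. The 10 m split follows the
# abstract; the input files and their layout are assumptions.
import numpy as np

def r2_rmse(chart, raster):
    """Coefficient of determination and RMSE between chart (observed) and raster depths."""
    residuals = raster - chart
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((chart - chart.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

chart_depth = np.loadtxt("nautical_points.csv", delimiter=",")[:, 2]       # assumed layout
raster_depth = np.loadtxt("mbes_raster_samples.csv", delimiter=",")[:, 2]  # assumed layout

shallow = np.abs(chart_depth) < 10.0     # depths are stored as negative values
for label, mask in [("shallow (<10 m)", shallow),
                    ("deep (>10 m)", ~shallow),
                    ("all depths", np.ones_like(shallow, dtype=bool))]:
    r2, rmse = r2_rmse(chart_depth[mask], raster_depth[mask])
    print(f"{label}: R2 = {r2:.2f}, RMSE = {rmse:.2f} m")
```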

Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water

Procedia PDF Downloads 381
880 Repair of Thermoplastic Composites for Structural Applications

Authors: Philippe Castaing, Thomas Jollivet

Abstract:

As a result of their advantages (recyclability, weldability, environmental compatibility), long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors, mainly automotive and aeronautic, for structural applications. Indeed, in the next ten years environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of the damage is due to impact, and 85% of the damage repaired is located on the fuselage (skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacing sections or panels is economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged thermoplastic composite part while recovering its initial mechanical properties. The classification of impact damage is not straightforward: speaking of low-energy impact (less than 35 J) can be misleading when high speeds, small thicknesses or thermoplastic resins are considered. Crash and perforation at higher energy create severe damage, and such structures are replaced rather than repaired, so only damage due to low-energy impacts is considered here, which in laminates takes the form of transverse cracking, delamination and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part through resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to the rupture of fibers and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observation; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, impact tests are performed at various energy levels on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. The extent of the damage is identified by ultrasonic (US) inspection and micrographic observations through the part thickness. The samples are in addition characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming, after which the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of the repair of thermoplastic composites after low-energy impact, as the samples recover their properties. In a first step, the “repair” is made by reconsolidation on a thermoforming press, but an in situ process to reconsolidate the damaged parts can be envisaged.

Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic

Procedia PDF Downloads 305
879 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; these building blocks were chosen to keep the computational cost to a minimum. The numerical model was validated by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as the limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model gives confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure that combines thin scaffolding and fabric has been demonstrated to be feasible, and the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
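
To make the validation criterion concrete, the sketch below computes the mean and maximum ring-position deviations in the longitudinal and transverse directions and checks them against the 5 mm bound mentioned above. The coordinate values and the axis convention are illustrative assumptions, not measurements from the study.

```python
# Sketch of the validation metric described above: compare ring positions from the
# bench-top deployment with those from the FE model and check the 5 mm bound.
# Coordinates and the axis convention (z = longitudinal, x = transverse) are assumed.
import numpy as np

# One (x, z) position per ring, in mm, for the experiment and the simulation (illustrative).
rings_experiment = np.array([[1.2, 10.0], [1.5, 25.4], [2.1, 41.2], [2.8, 57.0]])
rings_simulation = np.array([[1.0, 10.6], [1.8, 26.1], [2.5, 40.5], [3.1, 58.2]])

diff = np.abs(rings_simulation - rings_experiment)
transverse, longitudinal = diff[:, 0], diff[:, 1]

for name, d in [("transverse", transverse), ("longitudinal", longitudinal)]:
    print(f"{name}: mean = {d.mean():.2f} mm, max = {d.max():.2f} mm, "
          f"within 5 mm bound: {bool((d <= 5.0).all())}")
```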

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 193
878 Simulating an Interprofessional Hospital Day Shift: A Student Interprofessional (IP) Collaborative Learning Activity

Authors: Fiona Jensen, Barb Goodwin, Nancy Kleiman, Rhonda Usunier

Abstract:

Background: Clinical simulation is now a common component of many health profession curricula in preparation for clinical practice. In the Rady Faculty of Health Sciences (RFHS), college leads in simulation and interprofessional (IP) education planned an eight-hour simulated hospital day shift in which seventy students from six health professions across two campuses learned with each other in a safe, realistic environment. Learning about interprofessional collaboration, an expected competency for many health professions upon graduation, was a primary focus of the simulation event. Method: Faculty representatives from the Colleges of Nursing, Medicine, Pharmacy and Rehabilitation Sciences (Physical Therapy, Occupational Therapy, Respiratory Therapy) worked together to plan the IP event in a simulation facility in the College of Nursing. Each college provided a faculty mentor to guide students from their own profession. Students were placed in interprofessional teams consisting of a nurse, a physician and a pharmacist, with respiratory, occupational and physical therapists shared across teams depending on the needs of the patients. Eight patient scenarios were role-played by health profession students, who had been provided with their patient’s story shortly before the event. Each team was guided by a facilitator. Results and Outcomes: On the morning of the event, all students gathered in a large group to meet mentors and facilitators and receive a brief overview of the six competencies for effective collaboration and the session objectives. The students, in their own professional roles, were provided with their patient’s chart at the beginning of the shift, met with their team, and then completed profession-specific assessments. Shortly into the shift, IP team rounds began, guided by the team facilitator. During the shift, each patient role-played a spontaneous health incident, which required collaboration between the IP team members for assessment and management. The afternoon concluded with team rounds, a collaborative management plan, and a facilitated debrief. Conclusions: During the debrief sessions, students responded to set questions related to the session learning objectives and reported many positive learning moments. We believe this is a sustainable IP collaborative simulation learning opportunity that can be embedded into curricula and has the capacity to grow to include more health profession faculties and students. Opportunities are being explored in the RFHS at the administrative level to offer this event more frequently in the academic year to reach more students. In addition, a formally structured event evaluation tool would provide important feedback to event organizers and the colleges about the significance of the simulation event for student learning.

Keywords: simulation, collaboration, teams, interprofessional

Procedia PDF Downloads 131
877 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Rail Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, in order to raise awareness of biases in the systems engineering process, using RAIB report 17/2019 as the primary input. The RAIB, the UK’s independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree (MORT) technique. The benefits of the SIRI methodology are threefold. First, it incorporates the “heuristics and biases” approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role “optimism bias” plays in programme cost overruns and are familiar with bow-tie (fault and event tree) model-based safety risk modelling techniques, but the role of systematic errors due to heuristics and biases is not yet appreciated; addressing it overcomes the omission of human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signalling firms and transport planners, as well as front-line staff, so that lessons are learned at the decision-making and implementation levels too. Third, the author’s past accident case studies are supplemented with evidence drawn from practitioners’ and academic researchers’ publications. This supports a discussion of the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and artificial intelligence (AI).

Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach

Procedia PDF Downloads 190
876 Determinants of Quality of Life in Patients with Atypical Prarkinsonian Syndromes: 1-Year Follow-Up Study

Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic

Abstract:

Background: The group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and a considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are the main contributors to HRQoL in our cohort of patients with APS, and b) how does the quality of life of these patients change over a 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in a hospital setting. The initial study comprised all consecutive patients referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with initial diagnoses of ‘Parkinson’s disease’, ‘parkinsonism’, ‘atypical parkinsonism’ or ‘parkinsonism plus’ made within the first 8 months from the appearance of the first symptom(s). The patients were afterwards followed regularly at 4-6 month intervals, and the diagnoses were eventually established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). Health-related quality of life was assessed using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of the composite scores of the SF-36. Changes in quality of life scores between baseline and the follow-up time-point were quantified using the Wilcoxon signed-rank test, and the magnitude of any differences was calculated as an effect size (ES). Results: The final models of the hierarchical regression analysis showed that apathy, measured by the Apathy Evaluation Scale (AES) score, accounted for 59% of the variance in the Physical Health Composite Score of the SF-36 and 14% of the variance in the Mental Health Composite Score (p<0.01). Changes in HRQoL were assessed in 52 patients with APS who completed the 1-year follow-up period. The analysis of the magnitude of changes in HRQoL during the one-year follow-up period showed sustained medium ES (0.50-0.79) for both the Physical and Mental Health composite scores and total quality of life, as well as for the Physical Health, Vitality, Role Emotional and Social Functioning domains. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Both prognostic markers of poor HRQoL and the magnitude of its changes should be considered when developing comprehensive treatment strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.
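
As an illustration of the baseline versus follow-up comparison described above, the sketch below runs a Wilcoxon signed-rank test and derives an effect size using the common r = Z / sqrt(N) convention. The example scores and the effect-size formula are assumptions; the abstract does not state which ES definition the authors used.

```python
# Wilcoxon signed-rank test with an approximate effect size for paired baseline
# vs. follow-up scores. Scores are illustrative, not study data.
import numpy as np
from scipy import stats

baseline = np.array([45.0, 52.3, 38.9, 60.1, 47.5, 55.0, 41.2, 49.8])   # SF-36 composite, illustrative
follow_up = np.array([40.2, 47.1, 36.5, 52.0, 44.3, 50.2, 39.0, 44.6])

res = stats.wilcoxon(baseline, follow_up)
n = len(baseline)

# Convert the two-sided p-value to an approximate Z, then to r = Z / sqrt(N).
z = stats.norm.isf(res.pvalue / 2.0)
effect_size_r = z / np.sqrt(n)

print(f"Wilcoxon W = {res.statistic:.1f}, p = {res.pvalue:.3f}, effect size r = {effect_size_r:.2f}")
```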

Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS

Procedia PDF Downloads 307
875 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving

Authors: Navah Z. Ratzon, Talia Glick, Iris Manor

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer’s functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating driving in typical teenagers have been conducted in relatively small numbers, while studies on similar programs for teenagers with ADHD have been especially scarce. The aim of the present study was to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The study included 37 teenagers aged 17 to 19: 23 teenagers with ADHD, divided into experimental (n=11) and control (n=12) groups, and 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab and underwent cognitive assessments and a driving simulator test. Every subject in the intervention group took part in three assessment meetings and two metacognitive treatment meetings. The control groups took part in two assessment meetings with a follow-up meeting 3 months later. In all the study’s groups, the treatment’s effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of 5 subjects from the intervention group was monitored continuously for a month prior to the start of the intervention, a month during the intervention phase and another month up to the end of the intervention. In the ADHD control group, the driving of 4 subjects was monitored for a period of 3 months from the end of the first evaluation. The study’s findings were affected by the fact that the ADHD control group was different from the two other groups, exhibiting ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions and attitudes towards driving. According to the driving simulator test results and the limited sampling of actual driving, a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study indicate a positive direction that speaks to the viability of using a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required that includes a larger number of subjects, adds actual driving monitoring hours, and assigns subjects randomly to the various groups.

Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers

Procedia PDF Downloads 307
874 Awake Fiberoptic Intubation for Airway Management in a Patient with an Ulceroproliferative Mass of the Aryepiglottic Fold Obscuring Glottic Opening

Authors: Dielle Martins

Abstract:

A 45-year-old female presented with a 6-month history of progressively changing voice, difficulty breathing for the past month, and worsening dysphagia for the past two weeks, particularly with solids. Direct laryngoscopy revealed an ulceroproliferative mass arising from the left aryepiglottic fold and obscuring the glottic opening. Imaging with contrast-enhanced CT of the neck showed a lobulated, heterogeneous mass in the hypopharyngeal region, encroaching into the airway and involving the aryepiglottic fold and pyriform sinus, raising concern for a malignant lesion. Small reactive lymph nodes were identified in the left submandibular region and along the carotid sheath. Because of the location of the mass near the glottis and the risk of complete airway obstruction, securing the airway was a critical concern, and awake fiberoptic bronchoscopy for endotracheal intubation was chosen as the safest approach. After informed consent was obtained, the patient was positioned supine on the operating table and the airway was prepared with local anesthesia using nebulized 10% lignocaine and 4% lignocaine spray to the oral mucosa. To facilitate the fiberoptic intubation, the patient’s neck was extended and the head laterally rotated 30 degrees to the left; this positioning helped optimize visualization of the glottic opening, which was obscured by the mass. The fiberoptic scope was carefully passed through the oral cavity, past the uvula, and into the laryngeal area. As the scope advanced, the ulceroproliferative mass was seen covering most of the glottis, with only the anterior commissure visible. After further gentle manipulation, including the use of a shoulder roll for additional neck extension and rotation, a clearer view of the anterior two-thirds of the glottis was achieved. A 6.5 mm internal diameter endotracheal tube was advanced over the fiberoptic scope and successfully positioned just above the carina. General anesthesia was then induced, and an excision biopsy of the growth was performed. This case underscores the importance of careful preoperative airway evaluation and the role of awake fiberoptic intubation in managing complex airway obstruction; proper patient positioning, including neck extension and lateral rotation, proved crucial for successful intubation in the presence of a mass obscuring the glottic opening.

Keywords: awake fiberoptic bronchoscopy in laryngeal growth, difficult intubation in glottic cancer, glottic cancer, difficult airway

Procedia PDF Downloads 4
873 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, are relatively easy to build and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, the monitoring of the moisture content in wood is important for the durability of the material and for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer: the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are treated separately, and the coupling between them is defined through a sorption rate. Furthermore, an average of the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those works, the hygro-thermal fluxes on the external surfaces now include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and can increase safety.
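
To make the coupling idea concrete, the sketch below shows a deliberately simplified 1-D, two-field version of the multi-phase concept: vapour in the pores and bound water in the cell walls diffuse separately and exchange mass through a sorption rate. All coefficients, the linear isotherm and the explicit time stepping are illustrative assumptions; the authors' actual model is an Abaqus user subroutine with temperature coupling and solar radiation boundary fluxes.

```python
# Simplified two-phase moisture transport sketch: vapour (c_v) and bound water (c_b)
# diffuse separately and exchange mass through a sorption rate. Illustration only;
# all parameter values and the isotherm are assumed.
import numpy as np

nx, L = 50, 0.1                 # grid points, deck thickness [m]
dx = L / (nx - 1)
dt = 1.0                        # time step [s]
D_v, D_b = 2.0e-9, 5.0e-11      # vapour and bound-water diffusivities [m^2/s] (assumed)
k_sorp = 1.0e-5                 # sorption rate coupling the two phases [1/s] (assumed)

c_v = np.full(nx, 0.008)        # vapour concentration field (arbitrary units)
c_b = np.full(nx, 80.0)         # bound-water concentration field

def c_b_equilibrium(c_v):
    """Assumed linear sorption isotherm linking vapour content to equilibrium bound water."""
    return 1.0e4 * c_v

for step in range(3600):        # one hour of simulated time
    sorption = k_sorp * (c_b_equilibrium(c_v) - c_b)      # mass exchange between phases
    lap_v = np.gradient(np.gradient(c_v, dx), dx)         # approximate second derivatives
    lap_b = np.gradient(np.gradient(c_b, dx), dx)
    c_v += dt * (D_v * lap_v - sorption)
    c_b += dt * (D_b * lap_b + sorption)
    c_v[0] = c_v[-1] = 0.010    # assumed boundary vapour concentration (ambient humidity)

print(f"bound water at mid-depth after 1 h: {c_b[nx // 2]:.2f}")
```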

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 176
872 Developing Offshore Energy Grids in Norway as Capability Platforms

Authors: Vidar Hepsø

Abstract:

The energy and oil companies on the Norwegian continental shelf are moving from a situation in which each asset controls and manages its own energy supply (island mode) towards one in which assets need to collaborate and coordinate energy use, sharing the energy that is provided, because of the increased cost and scarcity of electric energy. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments having just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition heralds an increase in collaboration across boundaries, integration of energy management across companies, businesses and technical disciplines, and engagement with stakeholders in the larger society. The transition is described as a function of the new challenges arising from the increased complexity of the energy mix (wind, oil/gas, hydrogen and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes increasing integration across boundaries, whether those of companies, vendors, professional disciplines, regulatory regimes and bodies, businesses, or numerous societal stakeholders. New practices must be developed, made legitimate and institutionalized across these boundaries. Only parts of this complexity can be mitigated technically, e.g., by the use of batteries, mixed energy systems and simulation/forecasting tools; many challenges must be mitigated through legitimate societal and institutionalized governance practices on many levels. Offshore electrification supports Norway’s 2030 climate targets but is also controversial, since it exploits the larger society’s energy resources. This means that new systems and practices must be transparent not only to the industry and the authorities but also acceptable and just for the larger society. The paper reports on ongoing work in Norway, based on participant observation and interviews with projects and people working on offshore grid development. One case presented is the development of an offshore floating wind farm connected to two offshore installations; the second is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work process and governance capabilities under development in Norway. A capability platform is a ‘stack’ with the following layers: intelligent infrastructure, information and collaboration, knowledge sharing and analytics, and, finally, business operations. The need for better collaboration and energy forecasting tools and capabilities in this stack is given special attention in the two use cases presented.

Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model

Procedia PDF Downloads 68
871 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process in which various species of worms break down a mixture of decomposing vegetable or food waste and bedding materials into vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application to the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil-washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of the washing parameters using response surface methodology (RSM), based on a Box-Behnken design, was performed on the responses from the laboratory experiments. This study also investigated the application of machine learning models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), both evaluated using the coefficient of determination (R²) and the mean square error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out with the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 g, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min and a pH of 7.71. The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For RSM, R² was 0.9712 and 0.9614 for the batch and column processes, respectively, which also demonstrates agreement between experimental and predicted findings. For the ANFIS, R² was 0.7115 and 0.9978 for the batch and column processes, respectively. It can be concluded that machine learning models can predict the removal of crude oil from contaminated soil using vermicompost, and their use for this purpose is therefore recommended.
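
As an illustration of how ANN predictions can be scored with R² and MSE, a short sketch is given below. The network architecture, the train/test split and the synthetic four-factor data (adsorbent dosage, adsorbate concentration, contact time, pH) are assumptions for the example; the study itself used its Box-Behnken experimental results.

```python
# Sketch of the ANN evaluation step: fit a small neural network on the four washing
# factors and score it with R² and MSE. Synthetic data, illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Columns: adsorbent dosage [g], adsorbate concentration [g/ml], contact time [min], pH.
X = rng.uniform([10.0, 20.0, 5.0, 4.0], [40.0, 80.0, 30.0, 9.0], size=(60, 4))
y = 30 + 0.9 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 2.0 * X[:, 3] + rng.normal(0, 2, 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X_train, y_train)
y_pred = ann.predict(X_test)

print(f"R2 = {r2_score(y_test, y_pred):.4f}, MSE = {mean_squared_error(y_test, y_pred):.4f}")
```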

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 111