Search results for: modal characteristics
1346 Incident Management System: An Essential Tool for Oil Spill Response
Authors: Ali Heyder Alatas, D. Xin, L. Nai Ming
Abstract:
An oil spill emergency can vary in size and complexity, subject to factors such as volume and characteristics of spilled oil, incident location, impacted sensitivities and resources required. A major incident typically involves numerous stakeholders; these include the responsible party, response organisations, government authorities across multiple jurisdictions, local communities, and a spectrum of technical experts. An incident management team will encounter numerous challenges. Factors such as limited access to the location, adverse weather, poor communication, and lack of pre-identified resources can impede a response; delays caused by an inefficient response can exacerbate impacts on the wider environment and on socio-economic and cultural resources. It is essential that all parties work based on defined roles, responsibilities and authority, and ensure the availability of sufficient resources. To promote steadfast coordination and overcome the challenges highlighted, an Incident Management System (IMS) offers an essential tool for oil spill response. It provides clarity in command and control, improves communication and coordination, facilitates cooperation between stakeholders, and integrates the resources committed. Following the preceding discussion, a comprehensive review of existing literature serves to illustrate the application of IMS in oil spill response to overcome common challenges faced in a major-scale incident. With a primary audience comprising practitioners in mind, this study will discuss key principles of incident management which enable an effective response, along with pitfalls and challenges, particularly the tension between government and industry; case studies will be used to frame learning and issues consolidated from previous research, and provide the context to link practice with theory. It will also feature the industry approach to incident management which was further crystallized as part of a review by the Joint Industry Project (JIP) established in the wake of the Macondo well control incident. The authors posit that a common IMS which can be adopted across the industry not only enhances response capacity towards a major oil spill incident but is essential to the global preparedness effort.
Keywords: command and control, incident management system, oil spill response, response organisation
Procedia PDF Downloads 156
1345 Studies on the Effect of Bio-Methanated Distillery Spentwash on Soil Properties and Crop Yields
Authors: S. K. Gali
Abstract:
Spentwash, an effluent of the distillery industry, is an environmental pollutant because of its high load of pollutants (pH: 2-4; BOD > 40,000 mg/l, COD > 100,000 mg/l and TDS > 70,000 mg/l). However, after subjecting it to primary treatment (bio-methanation), its pollutant load is drastically reduced (pH: 7.5-8.5, BOD < 10,000 mg/l) and it can be disposed of safely as a source of organic matter and plant nutrients for crop production. With the consent of the State Pollution Control Board, the distilleries in Karnataka are taking up ‘one time controlled land application’ of bio-methanated spentwash in farmers’ fields. A monitoring study was undertaken in Belgaum district of Karnataka State with the objective of studying the effect of land application of bio-methanated spentwash of a distillery on soil properties and crop growth. The treated spentwash was applied uniformly to the fallow dry lands in different farmers’ fields during summer, 2012 at the recommended rate (based on the nitrogen requirement of crops). The application was made at least a fortnight before sowing/planting operations. The analysis of soils collected before land application of spentwash and after harvest of crops revealed that there was no adverse effect of the applied spentwash on soil characteristics. A slight build-up in soluble salts was observed, but all the soils recorded EC of less than 2.0 dSm-1. An increase in soil organic carbon (SOC) and available nitrogen (N) by about 10 to 30% was observed in the spentwash-applied soils. The presence of a good amount of biodegradable organics in the treated spentwash (BOD of 6550 mg/l) contributed to the increase in SOC and N. A substantial build-up in available potassium (K) status (50 to 200%) was observed due to spentwash application. This was attributed to the high K content in spentwash (6950 mg/l). The growth of crops in the spentwash-applied fields was higher, and farmers could get nearly 10 to 20 per cent higher yields, especially in sugarcane and corn. The analysis of ground water samples showed that the quality of water was not affected due to land application of treated spentwash. Apart from realizing higher crop yields, the farmers were able to save money on N and K fertilisers as the applied spentwash met the crop requirement. Hence, it could be concluded that bio-methanated distillery spentwash can be gainfully utilized in crop production without polluting the environment.
Keywords: bio-methanation, pollutant, potassium status, soil organic carbon
Procedia PDF Downloads 392
1344 Evaluation of Surface Water and Groundwater Quality in Parts of Umunneochi Southeast, Nigeria
Authors: Joshua Chima Chizoba, Wisdom Izuchukwu Uzoma, Elizabeth Ifeyiwa Okoyeh
Abstract:
Water cannot be optimally used and sustained unless the quality is periodically assessed. The study area, Umunneochi and environs, is located in the south-eastern part of Nigeria. It stretches geographically from latitudes 5°50′N to 6°00′N and longitudes 7°20′E to 7°30′E. The major geologic formations in the area include the Asu River Group, Nkporo Shale, and Ajali Sandstone. The aim of this study is to evaluate the hydrochemical characteristics of surface and ground water sources in parts of Umunneochi and environs in order to establish the potability of the water sources for drinking, domestic and irrigation purposes. A total of 15 samples were collected randomly from streams, springs and wells. The samples were analyzed for physicochemical parameters and heavy metals using handheld digital kits, a photometer, the titration method and Atomic Absorption Spectrophotometry (AAS) following acceptable standards. The obtained analytical data were interpreted, and results were compared with the World Health Organization (WHO) standard. The pH and the concentrations of SO42- and Cl- range from 5.81 – 6.07, 41.93 mg/l – 142.95 mg/l and 20.00 mg/l – 111 mg/l respectively, while Pb and Zn revealed relatively low mean concentrations of 0.14 mg/l and 0.40 mg/l, which are all within WHO permissible limits except pH. About 27% of the samples are moderately hard. This is attributed to the mining activities in the area. The abundance of cations and anions in the area is in the order of K+ > Na+ > Mg2+ > Ca2+ and SO42- > Cl- > HCO3- > NO3-, respectively. Chloride, bicarbonate, and nitrate are all within the permissible limits. 13.33% of the total samples contain sulphate above the standard permissible limits. The values of the calculated Water Quality Index (WQI) are less than 50, indicating excellent water. The predominant water types in the study area are the Na-Cl water type and the mixed Ca-Mg-Cl water type based on the sample plots on the Piper diagram. The Sodium Adsorption Ratio (SAR) calculations showed excellent water for consumption and also good water for irrigation purposes, with low sodium and alkalinity ratios, respectively. Government water projects are recommended in the area for sustainable domestic and agricultural water supply to ease the stress of water supply problems.
Keywords: groundwater, hydrochemical, physicochemical, water-type, sodium adsorption ratio
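For readers unfamiliar with the indices mentioned above, the short Python sketch below illustrates how a Sodium Adsorption Ratio and a simple weighted-arithmetic Water Quality Index can be computed; the ion values, parameter weights and limits in it are hypothetical placeholders rather than the study's measured data.

```python
# Illustrative sketch (not the authors' calculation): SAR and a simplified
# weighted-arithmetic WQI. All numeric inputs below are hypothetical.
import math

def sar(na_meq, ca_meq, mg_meq):
    """Sodium Adsorption Ratio; all concentrations in meq/L."""
    return na_meq / math.sqrt((ca_meq + mg_meq) / 2.0)

def wqi(concentrations, standards, weights):
    """Weighted-arithmetic WQI: sum(q_i * w_i) / sum(w_i),
    where q_i = 100 * C_i / S_i (C_i measured, S_i permissible limit)."""
    q = {p: 100.0 * concentrations[p] / standards[p] for p in concentrations}
    return sum(q[p] * weights[p] for p in q) / sum(weights.values())

# Hypothetical sample values for illustration only
print(round(sar(na_meq=2.1, ca_meq=0.9, mg_meq=1.2), 2))
conc = {"pH": 6.0, "SO4": 90.0, "Cl": 60.0}
std = {"pH": 8.5, "SO4": 250.0, "Cl": 250.0}
w = {"pH": 0.4, "SO4": 0.3, "Cl": 0.3}
print(round(wqi(conc, std, w), 1))   # < 50 would be classed "excellent"
```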
Procedia PDF Downloads 130
1343 The Effect of Bihemispheric Transcranial Direct Current Stimulation Therapy on Upper Extremity Motor Functions in Stroke Patients
Authors: Dilek Cetin Alisar, Oya Umit Yemisci, Selin Ozen, Seyhan Sozay
Abstract:
New approaches and treatment modalities are being developed to make patients more functional and independent in stroke rehabilitation. One of these approaches is transcranial direct current stimulation therapy (tDCS), which aims to improve the hemiplegic upper limb function of stroke patients. tDCS therapy is not in the routine rehabilitation program; however, studies of tDCS therapy in stroke rehabilitation have increased in recent years. Our study aimed to evaluate the effect of tDCS treatment on upper extremity motor function in patients with subacute stroke. 32 stroke patients (16 in the tDCS group, 16 in the sham group) who were hospitalized for rehabilitation in Başkent University Physical Medicine and Rehabilitation Clinic between 01.08.2016 and 20.01.2018 were included in the study. The conventional upper limb rehabilitation program was used for both tDCS and control group patients for 3 weeks, 5 days a week, for 60-120 minutes a day. In addition to the conventional stroke rehabilitation program, bihemispheric tDCS was administered for 30 minutes daily in the tDCS group. Patients were evaluated before treatment and after 1 week of treatment. The Functional Independence Measure (FIM) self-care score, Brunnstrom Recovery Stage (BRS), and the Fugl-Meyer (FM) upper extremity motor function scale were used. There was no difference in demographic characteristics between the groups. There were no significant differences in BRS and FM scores between the two groups, but there was a significant difference in FIM score (p=0.05). FIM, BRS, and FM scores improved significantly in the tDCS group between baseline and 1 week of therapy, whereas no difference was found in the sham group (p < 0.001). When BRS and FM scores were compared, there were statistically significant differences in the tDCS group (p < 0.001). In conclusion, this randomized double-blind study showed that bihemispheric tDCS treatment, in addition to conventional rehabilitation methods, was superior for upper extremity motor and functional enhancement in subacute stroke patients. For tDCS therapy to be used routinely in stroke rehabilitation, more comprehensive, long-term, randomized controlled clinical trials are needed to answer many questions, such as the duration and intensity of treatment.
Keywords: cortical stimulation, motor function, rehabilitation, stroke
Procedia PDF Downloads 127
1342 Comparison of Existing Predictor and Development of Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is an irreversible bond in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminal, via a thioester linkage. There are several experimental methods that can be used to identify the S-palmitoylation site; however, since they require a lot of time, computational methods are becoming increasingly necessary. There aren't many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is based on the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including GPS PALM 6.0, pCysMod, GPS LIPID 1.0, CSS PALM 4.0, and NBA PALM. These analyses were conducted by constructing the receiver operating characteristic (ROC) plot and the area under the curve score. An AI-driven deep learning-based prediction tool has been developed utilizing this analysis and three types of sequence-based input data: the amino acid composition, the binary encoding profile, and autocorrelation features. The model was developed using five layers, two activation functions, and associated parameters and hyperparameters. The model was built using various combinations of features, and after training and validation, it performed better when all the features were present, using the experimental dataset for 8- and 10-fold cross-validation. When testing the model with unseen and new data, such as the GPS PALM 6.0 plant and pCysMod mouse datasets, the model performed better, and the area under the curve score was near 1. It can be demonstrated that this model outperforms the prior tools in predicting the S-palmitoylation site in the experimental data set by comparing the area under the curve score of the 10-fold cross-validation of the new model with the established tools' area under the curve scores on their respective training sets. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the area under the curve score. Plant food production and immunological treatment targets can both be managed by utilizing this method to forecast S-palmitoylation sites.
Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score
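As a rough illustration of the evaluation described above, the following Python sketch shows a 10-fold cross-validated area-under-the-curve comparison for a small feed-forward classifier; the feature matrix, labels and network size are stand-ins, not the authors' actual model or dataset.

```python
# Minimal sketch, assuming a feature matrix X built from amino-acid composition,
# binary encoding and autocorrelation features, and binary labels y marking
# S-palmitoylated cysteine sites. Placeholder data only, not the study's model.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))          # placeholder sequence-derived features
y = rng.integers(0, 2, size=200)        # placeholder site labels

aucs = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

print("mean 10-fold AUC:", round(float(np.mean(aucs)), 3))
```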
Procedia PDF Downloads 77
1341 Association of the Frequency of the Dairy Products Consumption by Students and Health Parameters
Authors: Radyah Ivan, Khanferyan Roman
Abstract:
Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. Deficiency of milk and dairy products has an impact on the main health parameters of the various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female, average age 20.4±2.2 and 21.9±1.7 years, respectively) took part in the cross-sectional study. Anthropometric measurements were taken, and BMI and body composition were analyzed by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food frequency questionnaire. The questionnaire data on the frequency of consumption of dairy products demonstrated that only 11% of respondents consume milk daily, 5% cottage cheese, 4% and 1% natural fermented milk products and fermented milk products with fillers, respectively, and 4% hard cheese. The study demonstrated that about 16% of the respondents did not consume milk at all over the past month, about one third did not consume cottage cheese, 22% natural sour-milk products and 18% sour-milk products with various fillers. Hard cheeses and pickled cheeses were not consumed by 9% and 26% of respondents, respectively. Gender differences in consumer preferences were also revealed: female students are less likely to consume cream, sour cream, soft cheese and milk than male students. Among female students the prevalence of overweight was higher (25%) than among male students (19%). A modest inverse relationship was demonstrated between daily milk and dairy products consumption and BMI and body composition parameters (r=-0.61 and r=-0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low and infrequent consumption of dairy products and the main indicators of physical development and health.
Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index
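The sketch below illustrates, with fabricated numbers, the kind of BMI and Pearson-correlation calculation behind the reported r values; it is not the study's analysis, which was performed with dedicated survey and impedance data.

```python
# Illustrative only: BMI from anthropometry and a Pearson correlation between
# dairy-intake frequency and BMI. Records are made-up, not the study's data.
from scipy.stats import pearsonr

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# Hypothetical respondents: (weight kg, height m, dairy servings per week)
records = [(82, 1.78, 2), (70, 1.75, 7), (64, 1.70, 10), (90, 1.80, 1), (58, 1.65, 12)]
bmis = [bmi(w, h) for w, h, _ in records]
servings = [s for _, _, s in records]

r, p = pearsonr(servings, bmis)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```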
Procedia PDF Downloads 36
1340 Microbial Pathogens Associated with Banded Sugar Ants (Camponotus consobrinus) in Calabar, Nigeria
Authors: Ofonime Ogba, Augustine Akpan
Abstract:
Objectives and Goals: The study was aimed at determining the pathogenic microbial carriage on the external body parts of Camponotus consobrinus, which is also known as the banded sugar ant because of its liking for sugar and sweet food. The level of pathogenic microbial carriage of Camponotus consobrinus in relation to the environment from which they are collected is not known. Methods: The ants were purposively collected from four locations, including the kitchens and bedrooms of various homes, food shops, and bakeries. The sample collection took place between the hours of 6:30 pm and 11:00 pm. The ants were trapped in transparent plastic containers in which sugar, pineapple peels, sugar cane and soft drinks were used as bait. The ants were removed with a sterile spatula and put in 10 ml of peptone water in sterile universal bottles. The containers were vigorously shaken to wash the external surface of the ants. They were left overnight and transported to the Microbiology Laboratory, University of Calabar Teaching Hospital for analysis. The overnight peptone broths were inoculated on Chocolate agar, Blood agar, Cystine Lactose Electrolyte-Deficient agar (CLED) and Sabouraud dextrose agar. Incubation was done aerobically and in a carbon dioxide jar for 24 to 48 hours at 37°C. Isolates were identified based on colonial characteristics, Gram staining, and biochemical tests. Results: Out of the 250 Camponotus consobrinus caught for the study, 90 (36.0%) were caught in the kitchen, 75 (30.0%) in the bedrooms, 40 (16.0%) in the bakery, while 45 (18.0%) were caught in the shops. A total prevalence of 82.0% of different microbial isolates was associated with the ants. The kitchen had the highest number of isolates, 75 (36.6%), followed by the bedroom, 55 (26.8%), while the bakery recorded the lowest number of isolates, 35 (17.1%). The profile of micro-organisms associated with Camponotus consobrinus was Escherichia coli 73 (30.0%), Morganella morganii 45 (18.0%), Candida species 25 (10.0%), Serratia marcescens 10 (4.0%) and Citrobacter freundii 10 (4.0%). Conclusion: Most of the Camponotus consobrinus examined in the four locations harboured potential pathogens. The presence of ants in homes and shops can facilitate the propagation and spread of pathogenic microorganisms. Therefore, the development of basic preventive measures and the control of ants must be taken seriously.
Keywords: Camponotus consobrinus, potential pathogens, microbial isolates, spread
Procedia PDF Downloads 167
1339 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few tens of years, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms and thus pose a potential threat. They have been accelerated owing to eutrophication by human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the seas around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, respectively, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure) as the database used to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing algorithms and optical algorithms. The model had better performance in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
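A hedged sketch of the described two-stage architecture follows: a small CNN extracts features from multi-band image patches and a dense (ANN) head regresses the algae concentration. All layer sizes, the patch size and the training settings are assumptions for illustration, not the configuration reported in the study.

```python
# Illustrative CNN-feature-extraction + ANN-regression sketch (assumed sizes),
# not the study's trained model. Weather inputs are omitted for brevity.
import numpy as np
from tensorflow.keras import layers, models

n_bands = 3  # e.g. water-leaving radiance at 443, 490 and 660 nm

model = models.Sequential([
    layers.Input(shape=(32, 32, n_bands)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                      # CNN-extracted features
    layers.Dense(64, activation="relu"),   # ANN regression head
    layers.Dense(1),                       # algae concentration (mg/m^3)
])
model.compile(optimizer="adam", loss="mse")  # trained by backpropagation

# Placeholder patches and concentrations, just to show the training call
X = np.random.rand(16, 32, 32, n_bands).astype("float32")
y = (np.random.rand(16, 1) * 10).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:2], verbose=0))
```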
Procedia PDF Downloads 183
1338 LTE Modelling of a DC Arc Ignition on Cold Electrodes
Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov
Abstract:
The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magnetohydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach considering the secondary electron emission. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is in the order of a few hundred volts. Next, a spot with high temperature is formed at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also presents an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts and both the electrode and plasma temperatures reach the steady solution. The results after some seconds are similar to those presented for thermionic cathodes.
Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes
Procedia PDF Downloads 122
1337 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building
Authors: Sreto Boljevic
Abstract:
Heating systems based on geothermal energy sources are becoming increasingly popular among commercial/community buildings as the management of these buildings looks for more efficient and environmentally friendly ways to manage the heating system. The thermal energy supply of most European commercial/community buildings at present is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring in the area of thermal energy supply is essential. At present, heating and cooling account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, in contrast to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns for researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to traditional electric and fossil fuel space heating equipment used to supply thermal energy for residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study regarding the implementation of GSHP technology in community buildings with existing fossil-fueled heating systems. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system regarding the technical, environmental, and economic impacts of the system implementation, including the payback period. In addition, the algorithm is created so that it can be utilized for feasibility studies of many different types of buildings. The algorithm is tested on a building that was built in 1930 and is used as a church located in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.
Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy
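The feasibility algorithm itself is not reproduced in the abstract, so the sketch below only illustrates the kind of arithmetic such an assessment involves (annual running cost, CO2 balance and simple payback for replacing the 105 kW gas boiler with a GSHP); every price, efficiency, emission factor and the capital cost in it is an assumed placeholder.

```python
# Simplified feasibility arithmetic; all values are assumptions, not study data.
annual_heat_demand_kwh = 120_000      # assumed heat delivered per year
gas_boiler_efficiency = 0.85
gas_price_per_kwh = 0.08              # EUR, assumed
elec_price_per_kwh = 0.25             # EUR, assumed
heat_pump_scop = 4.0                  # seasonal COP, assumed
gshp_capital_cost = 90_000            # EUR installed, assumed

gas_cost = annual_heat_demand_kwh / gas_boiler_efficiency * gas_price_per_kwh
gshp_cost = annual_heat_demand_kwh / heat_pump_scop * elec_price_per_kwh
annual_saving = gas_cost - gshp_cost

gas_co2 = annual_heat_demand_kwh / gas_boiler_efficiency * 0.20   # kgCO2/kWh gas, assumed
gshp_co2 = annual_heat_demand_kwh / heat_pump_scop * 0.30         # kgCO2/kWh grid, assumed

print(f"annual saving: {annual_saving:,.0f} EUR")
print(f"simple payback: {gshp_capital_cost / annual_saving:.1f} years")
print(f"CO2 reduction: {(gas_co2 - gshp_co2) / 1000:.1f} t/year")
```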
Procedia PDF Downloads 220
1336 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach
Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar
Abstract:
The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensuring high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases when they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the nature of the monotonous task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify the disease within the plant from its symptoms at early stages are not readily available in remote regions. Therefore, this study specifically addressed the early detection of leaf scald, red rot, and eyespot types of diseases within sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, firstly, images of sugarcane diseases were taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. Then, the testing dataset was developed based on images collected in real time from sugarcane fields in India. The image dataset was then pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising result of the proposed model lays the groundwork for the automatic early detection of sugarcane disease. The proposed research directly sustains an increase in crop yield.
Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group
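As an illustration of the VGG-based classification step, the following sketch uses a pre-trained VGG16 backbone with a small binary head for healthy versus diseased leaves; the directory layout, image size and hyperparameters are assumptions, not the paper's exact setup.

```python
# Hedged transfer-learning sketch (assumed data layout and settings),
# not the authors' exact VGG configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # healthy vs. diseased
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Recall(name="sensitivity")])

# Assumed folder layout: data/train/{healthy,diseased}, data/test/{healthy,diseased}
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=16, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test", image_size=(224, 224), batch_size=16, label_mode="binary")

model.fit(train_ds, validation_data=test_ds, epochs=5)
model.evaluate(test_ds)
```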
Procedia PDF Downloads 116
1335 Predicting Mass-School-Shootings: Relevance of the FBI’s ‘Threat Assessment Perspective’ Two Decades Later
Authors: Frazer G. Thompson
Abstract:
The 1990s in America ended with a mass-school-shooting (at least four killed by gunfire, excluding the perpetrator(s)) at Columbine High School in Littleton, Colorado. Post-event, many demanded that government and civilian experts develop a ‘profile’ of the potential school shooter in order to identify and preempt likely future acts of violence. This grounded theory research study seeks to explore the validity of the original hypotheses proposed by the Federal Bureau of Investigation (FBI) in 2000, as they relate to the commonality of disclosure by perpetrators of mass-school-shootings, by evaluating fourteen mass-school-shooting events between 2000 and 2019 at locations around the United States. Methods: The strategy of inquiry seeks to investigate case files, public records, witness accounts, and available psychological profiles of the shooter. The research methodology includes one-on-one interviews with members of the FBI’s Critical Incident Response Group seeking perspective on commonalities between individuals; specifically, disclosure of intent pre-event. Results: The research determined that school shooters do not ‘unfailingly’ notify others of their plans. However, in nine of the fourteen mass-school-shooting events analyzed, the perpetrator did inform a third party of their intent pre-event in some form of written, oral, or electronic communication. In the remaining five instances, the so-called ‘red-flag’ indicators of the potential for an event to occur were profound and, unto themselves, might be interpreted as notification to others of an imminent deadly threat. Conclusion: Data indicate that the conclusions drawn in the FBI’s threat assessment perspective published in 2000 are relevant and current. There is evidence that despite potential ‘red-flag’ indicators, which may or may not include a variety of other characteristics, perpetrators of mass-school-shooting events are likely to share their intentions with others through some form of direct or indirect communication. More significantly, the implications of this research might suggest that society is often informed of potential danger pre-event but lacks any equitable means by which to disseminate, prevent, intervene, or otherwise act in a meaningful way considering said revelation.
Keywords: columbine, FBI profiling, guns, mass shooting, mental health, school violence
Procedia PDF Downloads 118
1334 Yield and Physiological Evaluation of Coffee (Coffea arabica L.) in Response to Biochar Applications
Authors: Alefsi D. Sanchez-Reinoso, Leonardo Lombardini, Hermann Restrepo
Abstract:
Colombian coffee is recognized worldwide for its mild flavor and aroma. Its cultivation generates a large amount of waste, such as fresh pulp, which leads to environmental, health, and economic problems. Obtaining biochar (BC) by pyrolysis of coffee pulp and incorporating it into the soil can be a complement to the crop's mineral nutrition. The objective was to evaluate the effect of the application of BC obtained from coffee pulp on the physiology and agronomic performance of the Castillo variety coffee crop (Coffea arabica L.). The research was developed as a field experiment using a three-year-old commercial coffee crop, carried out in Tolima. Four doses of BC (0, 4, 8 and 16 t ha-1) and four levels of chemical fertilization (CF) (0%, 33%, 66% and 100% of the nutritional requirements) were evaluated. Three groups of variables were recorded during the experiment: i) physiological parameters such as gas exchange, the maximum quantum yield of PSII (Fv/Fm), biomass, and water status; ii) physical and chemical characteristics of the soil in a commercial coffee crop; and iii) physicochemical and sensorial parameters of roasted beans and coffee beverages. The results indicated a positive effect in plants with 8 t ha-1 BC and fertilization levels of 66% and 100%. Also, a positive effect was observed in coffee trees treated with 8 t ha-1 BC and 100% CF. In addition, the application of 16 t ha-1 BC increased the soil pH and microbial respiration and reduced the apparent density and state of aggregation of the soil compared to 0 t ha-1 BC. Applications of 8 and 16 t ha-1 BC and 66%-100% chemical fertilization registered greater sensitivity to the aromatic compounds of roasted coffee beans in the electronic nose. Amendments of BC between 8 and 16 t ha-1 and CF between 66% and 100% increased the content of total soluble solids (TSS), reduced the pH, and increased the titratable acidity in beverages of roasted coffee beans. In conclusion, 8 t ha-1 BC of the coffee pulp can be an alternative to supplement the nutrition of coffee seedlings and trees. Applications between 8 and 16 t ha-1 BC support coffee soil management strategies and help the use of solid waste. BC as a complement to chemical fertilization showed a positive effect on the aromatic profile obtained for roasted coffee beans and cup quality attributes.
Keywords: crop yield, cup quality, mineral nutrition, pyrolysis, soil amendment
Procedia PDF Downloads 111
1333 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios and detect dependencies between the covariates and the given target, as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
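A minimal sketch of the described workflow is given below: an AdaBoost classifier is trained on synthetic stand-in features, its impurity-based feature importances are used to keep the most informative covariates, and the reduced model is re-evaluated. It is illustrative only and does not use the Bosch data.

```python
# Illustrative AdaBoost feature-importance selection on placeholder data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                 # placeholder process features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank features and retain, say, the ten most important ones
ranking = np.argsort(clf.feature_importances_)[::-1]
selected = ranking[:10]
print("selected feature indices:", selected)

clf_small = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr[:, selected], y_tr)
print(classification_report(y_te, clf_small.predict(X_te[:, selected])))
```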
Procedia PDF Downloads 162
1332 Determination of the Walkability Comfort for Urban Green Space Using Geographical Information System
Authors: Muge Unal, Cengiz Uslu, Mehmet Faruk Altunkasa
Abstract:
Walkability relates to the ability of places to connect people with varied destinations within a reasonable amount of time and effort, and to offer visual interest in journeys throughout the network. Thus, the quality of the physical environment and the arrangement of walkways and sidewalks appear to be crucial in influencing pedestrian route choice. Also, proximity, connectivity, and accessibility are significant factors for walkability in terms of an equal opportunity for using public spaces. As a result, there are two important points for walkability: firstly, the place should have a well-planned, accessible street network, and secondly, it should facilitate the pedestrian need for comfort. In this respect, this study aims to examine both the physical and bioclimatic comfort levels of the current pedestrian routes with reference to the design criteria of a street for accessing urban green spaces. These aspects have been identified as the main indicators for walkable streets, such as continuity, materials, slope, bioclimatic condition, walkway width, greenery, and surface. Additionally, the aim was to identify the factors that need to be considered in future guidelines and policies for planning and design in urban spaces, especially streets. Adana city was chosen as the study area. Adana is a province of Turkey located in south-central Anatolia. The study workflow can be summarized in four stages: (1) environmental and physical data were collected with reference to the literature and used in a weighted criteria method to determine the importance level of these data; (2) environmental characteristics of pedestrian routes obtained from survey studies were evaluated to rank these criteria; (3) each pedestrian route was then given a score reflecting how comfortably it provides access to the park; and (4) finally, the comfortable routes to the park were mapped using GIS. It is hoped that this study will provide an insight into future development planning and design to create a friendlier and more comfortable street environment for users.
Keywords: comfort level, geographical information system (GIS), walkability, weighted criteria method
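The weighted criteria step can be illustrated with the short sketch below, in which each route receives a comfort score from normalised ratings and assumed weights; the criteria weights and ratings shown are hypothetical, not the values derived in the study.

```python
# Illustrative weighted-criteria scoring; weights and ratings are assumptions.
criteria_weights = {            # assumed importance levels (sum to 1.0)
    "continuity": 0.20, "surface_material": 0.15, "slope": 0.15,
    "bioclimatic_comfort": 0.20, "walkway_width": 0.15, "greenery": 0.15,
}

routes = {                      # assumed 1-5 ratings per criterion
    "route_A": {"continuity": 4, "surface_material": 3, "slope": 5,
                "bioclimatic_comfort": 2, "walkway_width": 4, "greenery": 3},
    "route_B": {"continuity": 2, "surface_material": 4, "slope": 3,
                "bioclimatic_comfort": 4, "walkway_width": 2, "greenery": 5},
}

def comfort_score(ratings, weights, max_rating=5):
    """Weighted sum of normalised ratings, scaled to 0-100."""
    return 100 * sum(weights[c] * ratings[c] / max_rating for c in weights)

for name, ratings in routes.items():
    print(name, round(comfort_score(ratings, criteria_weights), 1))
```

A score computed this way can then be joined back to the route geometries in the GIS layer for mapping.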
Procedia PDF Downloads 311
1331 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population
Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya
Abstract:
Male infertility is a major problem in the world, and it is a neglected and sensitive health issue in Sri Lanka. It can be determined by analyzing human semen samples. Sperm motility is one of many factors that can be used to evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and person-dependent; they can be reliable, but this depends on the expert. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but these methods are still unreliable. Such automatic methods tend to produce false positive results and false detections. Current automatic methods support different techniques, and some of them are very expensive. Due to the geographical variance in spermatozoa characteristics, current automatic methods are not reliable for motility analysis in Sri Lanka. The suggested system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and present it to andrology laboratories to overcome the current issues. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique to analyze semen samples. The problem was identified as one of microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, and it achieved 94% mAP (mean average precision), 86% precision, and 90% recall with the gathered dataset. StrongSORT was used as the object tracker, and it was validated with andrology experts due to the unavailability of annotated ground truth data. Furthermore, this research has identified many potential ways for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa
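Once detections are linked into tracks, per-sperm motility parameters can be derived from the centroid trajectories; the sketch below shows one common way to do this (curvilinear velocity, straight-line velocity and linearity), with a made-up track and calibration values, and is not part of the DeepMotile implementation.

```python
# Illustrative motility-parameter calculation from a tracked trajectory.
# The track, frame rate and pixel calibration below are hypothetical.
import math

def motility_parameters(track, fps, um_per_px):
    """VCL (curvilinear) and VSL (straight-line) velocity in um/s, and LIN = VSL/VCL.
    `track` is a list of (x, y) centroid positions in pixels, one per frame."""
    duration_s = (len(track) - 1) / fps
    path_px = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    straight_px = math.dist(track[0], track[-1])
    vcl = path_px * um_per_px / duration_s
    vsl = straight_px * um_per_px / duration_s
    return vcl, vsl, (vsl / vcl if vcl else 0.0)

track = [(10, 10), (14, 12), (17, 16), (22, 15), (26, 19), (31, 20)]
vcl, vsl, lin = motility_parameters(track, fps=30, um_per_px=0.8)
print(f"VCL={vcl:.1f} um/s  VSL={vsl:.1f} um/s  LIN={lin:.2f}")
```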
Procedia PDF Downloads 106
1330 The Role of User Participation on Social Sustainability: A Case Study on Four Residential Areas
Authors: Hasan Taştan, Ayşen Ciravoğlu
Abstract:
The rapid growth of the human population and the environmental degradation associated with increased consumption of resources raise concerns about sustainability. Social sustainability constitutes one of the three dimensions of sustainability, together with the environmental and economic dimensions. Even though there is no agreement on what social sustainability consists of, it is a well-known fact that it necessitates user participation. Therefore, this study aims to observe and analyze the role of user participation in social sustainability. In this paper, the links between user participation and indicators of social sustainability have been explored. In order to achieve this, first of all, a literature review on social sustainability has been carried out; accordingly, the information obtained from previous research has been used in the evaluation of projects conducted in developing countries with user participation. These examples are taken as role models, with pros and cons, for the development of the checklist for the evaluation of the case studies. Furthermore, a case study of post-earthquake residential settlements in Turkey has been conducted. The case study projects are selected considering different building scales (differing numbers of residential units), the scale of the problem (post-earthquake settlements, rehabilitation of shanty dwellings) and the variety of users (differing socio-economic dimensions). Decision-making, design, building and usage processes of the selected projects and the actors of these processes have been investigated in the context of social sustainability. The cases include New Gourna Village by Hassan Fathy, the Quinta Monroy dwelling units in Chile by Alejandro Aravena, and the Beyköy and Beriköy projects in Turkey, which aimed to solve the housing problem that appeared after the earthquake of 1999. Results of the study reveal possible links between social sustainability indicators and user participation, and between user participation and the peculiarities of place. Results are compared and discussed in order to find possible solutions to foster social sustainability through user participation. Results show that social sustainability issues depend on communities' characteristics, socio-economic conditions and user profile, but user participation has positive effects on some social sustainability indicators like user satisfaction, a sense of belonging and social stability.
Keywords: housing projects, residential areas, social sustainability, user participation
Procedia PDF Downloads 391
1329 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
One of the most harmful geological risks is rockfall. Rockfalls cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In Situ Block Size Distribution) is obtained by photogrammetry from drone or TLS (Terrestrial Laser Scanner) surveys, and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We have used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer. A previous phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) at their real size, where possible, and in their real locations. The last task was the publication of all the information in a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
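For illustration, the snippet below shows how a client could request a map image from a WMS endpoint of the kind served by GeoServer here; the service URL, layer name and bounding box are placeholders, not the project's actual values.

```python
# Illustrative WMS GetMap request; URL, layer and bbox are hypothetical.
import requests

params = {
    "service": "WMS", "version": "1.3.0", "request": "GetMap",
    "layers": "rockfalls:events",          # hypothetical layer name
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "39.0,0.5,42.5,3.5",           # minLat,minLon,maxLat,maxLon in WMS 1.3.0
    "width": "800", "height": "600",
    "format": "image/png",
}
resp = requests.get("https://example.org/geoserver/wms", params=params, timeout=30)
resp.raise_for_status()
with open("rockfall_inventory.png", "wb") as f:
    f.write(resp.content)
```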
Procedia PDF Downloads 160
1328 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse
Authors: Veronika Sharok
Abstract:
The relevance of research on the psychological determinants of risky behaviors stems from the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex, frequent change of sexual partners, and drug and alcohol use lead to negative social consequences and contribute to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into 3 empirical groups: persons prone to risky sexual behavior, drug users and alcohol users; and 3 control groups: individuals who are not prone to risky sexual behavior, persons who do not use drugs and respondents who do not use alcohol. For processing, we used the following methods: a qualitative method for nominative data (chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, Pearson's r correlation test). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include moral and value attitudes; the significance of existential values: love, life, self-actualization and the search for the meaning of life; understanding independence as responsibility for one's freedom and the ability to become attached to someone or something up to the point when this relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous for the person and for others; and self-acknowledgement. External factors (which prevent risky behaviors in the absence of the internal ones) include: the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because it is usually caused by the same reasons. It should be noted that if prevention of risky behavior is based only on the elimination of external factors, it is not as effective as it may be if we pay more attention to internal factors. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors, for using values that prevent such behaviors, and for promoting a healthy lifestyle.
Keywords: existential values, prevention, psychological features, risky behavior
Procedia PDF Downloads 256
1327 Pediatric Hearing Aid Use: A Study Based on Data Logging Information
Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham
Abstract:
Introduction: Hearing loss (HL) is one of the most common disorders that present at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted based on the assumption that with early identification of HL, children will have access to optimal amplification and intervention at younger ages, thereby taking advantage of the brain's maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child's development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children who were diagnosed with bilateral HL before 72 months of age between 2003 and 2015 in a pediatric audiology clinic and who had at least two hearing aid follow-up sessions with available data logging information were included in the study. Data from each audiology session (age of child at the session, average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age of HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the results for the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of the first 20 cases, the median hours of use in the first follow-up session after the hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h. For the left ear, the median is 4.4 hours and the IQR is 9.7 h. In the first session, 47% of the children use their hearing aids ≤5 hours, 12% use them between 5 and 10 hours and 22% use them ≥10 hours a day. However, these children showed increased use by the third follow-up session, with a median of 9.1 hours (IQR 2.5) for the right ear and 8.2 hours (IQR 5.6) for the left ear. By the third follow-up session, 14% of children used hearing aids ≤5 hours, while 38% of children used them ≥10 hours. Based on the preliminary results, factors like age and level of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess the actual hours of HA use provides an opportunity to examine: a) the challenges of families of young children with HAs, and b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child's hearing potential.
Keywords: hearing loss, hearing aid, data logging, hours of use
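The summary statistics reported above (medians, IQRs and the share of children above 10 hours per session) can be reproduced from a data-logging table with a few lines of analysis; the sketch below uses fabricated example values, not the clinic's data.

```python
# Illustrative per-session summary of logged hours; values are made up.
import pandas as pd

log = pd.DataFrame({
    "child": ["A", "B", "C", "D", "E", "A", "B", "C", "D", "E"],
    "session": [1, 1, 1, 1, 1, 3, 3, 3, 3, 3],
    "hours_right": [3.0, 1.5, 10.5, 4.2, 12.0, 8.5, 6.0, 11.0, 9.1, 13.2],
})

summary = log.groupby("session")["hours_right"].agg(
    median="median",
    iqr=lambda s: s.quantile(0.75) - s.quantile(0.25),
    pct_over_10h=lambda s: 100 * (s >= 10).mean(),
)
print(summary)
```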
Procedia PDF Downloads 230
1326 The Context of Teaching and Learning Primary Science to Gifted Students: An Analysis of Australian Curriculum and New South Wales Science Syllabus
Authors: Rashedul Islam
Abstract:
A firmly validated aim of teaching science is to support student enthusiasm for science learning, with a broad interest in scientific issues in future life. This is in keeping with recent developments in Gifted and Talented Education statements, which indicate that gifted students have a renewed interest and natural aptitude in science. Yet, the practice of science teaching leaves many students with the feeling that science is difficult and, compared to other school subjects, students' interest in science is declining in the final years of primary school. Curricula guide the teaching-learning activities in schools, and significant consequences may result from the context of the curricula and syllabi, which are a major feature of the educational jurisdiction of NSW, Australia. The purpose of this study was to explore how the curriculum sets the context for the way science education is practiced in primary schools in Sydney, Australia. This phenomenon was explored through a document review of two publicly available documents, namely the NSW Science Syllabus K-6 and the Australian Curriculum: Foundation - 10 Science. To analyse the data, this qualitative study applied themed content analysis at three different levels, i.e., first-cycle coding, second-cycle coding (pattern codes), and thematic analysis. Preliminary analysis revealed the phenomenon of teaching-learning practices drawn from eight themes under three phenomena, aligned with teachers' practices and gifted students' learning characteristics based on Gagné's Differentiated Model of Giftedness and Talent (DMGT). From the results, it appears that, overall, the two documents are relatively well-placed in terms of identifying the context of teaching and learning primary science for gifted students. However, educators need to make themselves aware of the ways in which the curriculum needs to be adapted to meet gifted students' learning needs in science. The study explores the important phenomena of the teaching-learning context to provide gifted students with optimal educational practices, including inquiry-based learning, problem-solving, open-ended tasks, creativity in science, higher-order thinking, integration, and challenges. The significance of such a study lies in its potential to inform schools and further research in the field of gifted education.
Keywords: teaching primary science, gifted student learning, curriculum context, science syllabi, Australia
Procedia PDF Downloads 421
1325 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser
Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt
Abstract:
This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain with dimensions 400 m long, 200 m wide and an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction – perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution and seafloor concentration is significant. The effect of ambient flow structure and the subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.
Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet
Procedia PDF Downloads 214
1324 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance required of material surfaces in wear and corrosion environments cannot be achieved by conventional surface modifications and coatings; different industrial sectors therefore need alternative techniques for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limits their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings deposited on a Grade 5 titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous-wave ytterbium laser system (YLS), attached to a KUKA robot that controls the movement of the cladding process, was utilized for the fabrication of the coatings. The cladded titanium surfaces were investigated for hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C, and their corrosion behaviour was assessed using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as the width, height and Heat Affected Zone (HAZ) of each sample, increased remarkably with increasing laser power due to the laser-material interaction. Higher amounts of aluminium and titanium were observed in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to optimized laser process parameters, which led to the refinement of the coatings. Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
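As a rough aid to reading the power and scan-speed trends above, the line energy delivered to the substrate scales as laser power divided by scan speed. The sketch below computes this for the two reported scan speeds; the power levels are assumed purely for illustration and are not values from the study.

```python
# Hedged illustration: line energy (laser power / scan speed) for the two scan
# speeds reported in the abstract. Power levels are assumed for illustration only.
def line_energy_j_per_mm(power_w, speed_m_per_min):
    speed_mm_per_s = speed_m_per_min * 1000.0 / 60.0
    return power_w / speed_mm_per_s

for power in (800, 1000, 1200):          # assumed laser powers, W
    for speed in (0.8, 1.0):             # scan speeds from the abstract, m/min
        print(f"P = {power} W, v = {speed} m/min -> "
              f"{line_energy_j_per_mm(power, speed):.1f} J/mm")
```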
Procedia PDF Downloads 178
1323 Characterization of Kevlar 29 for Multifunction Applications
Authors: Doaa H. Elgohary, Dina M. Hamoda, S. Yahia
Abstract:
Technical textiles refer to textile materials that are engineered and designed to have specific functionalities and performance characteristics beyond their traditional use as apparel or upholstery fabrics. These textiles are usually developed for properties such as strength, durability, flame retardancy, chemical resistance, waterproofing, insulation and other special characteristics. The development and use of technical textiles are constantly evolving, driven by advances in materials science, manufacturing technologies and the demand for innovative solutions in various industries. Kevlar 29 is a type of aramid fiber developed by DuPont. It is a high-performance material known for its exceptional strength and resistance to impact, abrasion and heat, and belongs to the Kevlar family, which includes different types of aramid fibers. Kevlar 29 is primarily used in applications that require strength and durability, such as ballistic protection and body armor for military and law enforcement personnel. It is also used in the aerospace and automotive industries to reinforce composite materials, as well as in various industrial applications. Two different Kevlar samples coated with copper lithium silicate (CLS) were used; ten mechanical and physical properties (weight, thickness, tensile strength, elongation, stiffness, air permeability, puncture resistance, thermal conductivity, stiffness, and spray test) were measured to verify their functional performance efficiency. The influence of the different mechanical properties was statistically analyzed using an independent t-test at a significance level of P = 0.05. A radar plot was calculated and evaluated to determine the best-performing sample. The independent t-test showed that all variables were significantly affected by yarn count except water permeability, which showed no significant effect. All properties were evaluated for samples 1 and 2, and a radar chart was used to determine the better-performing sample. The radar chart area was calculated, showing that sample 1 recorded the best performance, followed by sample 2. The surface morphology of the samples and the coating material was examined using a scanning electron microscope (SEM), and Fourier transform infrared (FTIR) spectroscopy measurements were performed on the two samples. Keywords: copper lithium silicate, independent t-test, Kevlar, technical textiles
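The radar-chart comparison described above reduces each sample to the polygon area spanned by its normalized property scores. The sketch below shows one way to compute that area; the ten scores per sample are made-up placeholders, not the measured values from this work.

```python
# Minimal sketch of the radar-chart area comparison described in the abstract.
# The property scores below are placeholders, not measured values.
import math

def radar_area(scores):
    """Polygon area of a radar chart with equally spaced axes and normalized scores."""
    n = len(scores)
    wedge = math.sin(2 * math.pi / n)
    return 0.5 * wedge * sum(scores[i] * scores[(i + 1) % n] for i in range(n))

sample_1 = [0.9, 0.8, 0.85, 0.7, 0.95, 0.8, 0.9, 0.75, 0.85, 0.8]   # placeholder scores
sample_2 = [0.8, 0.75, 0.8, 0.65, 0.9, 0.7, 0.85, 0.7, 0.8, 0.75]

print("sample 1 area:", round(radar_area(sample_1), 3))
print("sample 2 area:", round(radar_area(sample_2), 3))
```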
Procedia PDF Downloads 80
1322 Bio-Remediation of Lead-Contaminated Water Using Adsorbent Derived from Papaya Peel
Authors: Sahar Abbaszadeh, Sharifah Rafidah Wan Alwi, Colin Webb, Nahid Ghasemi, Ida Idayu Muhamad
Abstract:
Toxic heavy metal discharges into the environment due to rapid industrialization are a serious pollution problem that has drawn global attention to their adverse impacts on both the structure of ecological systems and human health. Lead, a toxic element that bio-accumulates through the food chain, regularly enters water bodies via discharges from industries such as plating, mining, battery manufacture and paint manufacture. The application of conventional methods to decrease and remove Pb(II) ions from wastewater is often restricted by technical and economic constraints. Therefore, the use of various agro-wastes as low-cost bioadsorbents is attractive, since they are abundantly available and cheap. In this study, activated carbon derived from papaya peel (AC-PP), a locally available agricultural waste, was employed to evaluate its Pb(II) uptake capacity from single-solute solutions in sets of batch-mode experiments. To assess the surface characteristics of the adsorbent, scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis and Fourier transform infrared (FT-IR) spectroscopy were utilized. The amount of Pb(II) removed was determined by atomic absorption spectrometry (AAS). The effects of pH, contact time, initial Pb(II) concentration and adsorbent dosage were investigated. A pH value of 5 was observed to be the optimum solution pH. The optimum initial concentration of Pb(II) in solution for AC-PP was found to be 200 mg/l, at which the amount of Pb(II) removed was 36.42 mg/g. At an agitation time of 2 h, the adsorption process using a 100 mg dosage of AC-PP reached equilibrium. The experimental results show the high capability and metal affinity of the modified papaya peel waste, with a removal efficiency of 93.22%. The evaluation shows that the equilibrium adsorption of Pb(II) was best described by the Freundlich isotherm model (R² > 0.93). The experimental results confirmed that AC-PP can potentially be employed as an alternative adsorbent for Pb(II) uptake from industrial wastewater in the design of an environmentally friendly yet economical wastewater treatment process. Keywords: activated carbon, bioadsorption, lead removal, papaya peel, wastewater treatment
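As a pointer to how the Freundlich fit quoted above (R² > 0.93) is typically obtained, the sketch below linearizes q_e = K_F · C_e^(1/n) in log-log space and fits it by least squares. The equilibrium concentrations and uptakes are illustrative placeholders, not the study's data.

```python
# Sketch of a Freundlich isotherm fit (q_e = K_F * C_e**(1/n)) of the kind used
# in the abstract. The concentration/uptake data below are illustrative only.
import numpy as np

C_e = np.array([10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium conc., mg/L (assumed)
q_e = np.array([8.0, 14.0, 20.0, 28.0, 36.4])      # uptake, mg/g (assumed)

slope, intercept = np.polyfit(np.log10(C_e), np.log10(q_e), 1)
K_F, n = 10 ** intercept, 1.0 / slope

pred = intercept + slope * np.log10(C_e)
ss_res = np.sum((np.log10(q_e) - pred) ** 2)
ss_tot = np.sum((np.log10(q_e) - np.log10(q_e).mean()) ** 2)

print(f"K_F = {K_F:.2f}, n = {n:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```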
Procedia PDF Downloads 286
1321 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each), and the remaining 80% of models are classified by the trained SVM. We select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to identify the correct geological features of the true model. In contrast, history matching with the regenerated ensemble offers reliable characterization results by capturing the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which share the reference model's channel trend in the reduced-dimension space. Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
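A minimal sketch of the PCA, MDS and SVM selection loop described above is given below, using synthetic permeability fields and a placeholder WOPR misfit in place of the actual reservoir ensemble and flow simulations; the component counts and kernel choice are assumptions, not the authors' settings.

```python
# Minimal sketch of the PCA -> MDS -> SVM selection pipeline described in the
# abstract, using synthetic permeability fields in place of real reservoir models.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_models, n_cells = 100, 50 * 50
perm = rng.lognormal(mean=3.0, sigma=1.0, size=(n_models, n_cells))  # synthetic models
wopr_error = rng.random(n_models)                                    # placeholder WOPR misfit

# 1) PCA keeps the components with the largest eigenvalues.
scores = PCA(n_components=10).fit_transform(np.log(perm))

# 2) MDS projects the PCA scores onto a 2-D plane using Euclidean distances.
xy = MDS(n_components=2, random_state=0).fit_transform(scores)

# 3) Train an SVM on the 10% best and 10% worst models, then classify the rest.
order = np.argsort(wopr_error)
train_idx = np.concatenate([order[:10], order[-10:]])
labels = np.array([1] * 10 + [0] * 10)                # 1 = low WOPR error, 0 = high
clf = SVC(kernel="rbf").fit(xy[train_idx], labels)

selected = np.where(clf.predict(xy) == 1)[0]          # models kept for regeneration
print("models selected for regeneration:", selected[:10], "...")
```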
Procedia PDF Downloads 160
1320 Preliminary Report on the Assessment of the Impact of the Kinesiology Taping Application versus Placebo Taping on the Knee Joint Position Sense
Authors: Anna Hadamus, Patryk Wasowski, Anna Mosiolek, Zbigniew Wronski, Sebastian Wojtowicz, Dariusz Bialoszewski
Abstract:
Introduction: Kinesiology Taping is a very popular physiotherapy method, often used on healthy people, especially athletes, in order to stimulate the muscles and improve their performance. The aim of this study was to determine the effect of a muscle application of Kinesiology Taping on joint position sense in active motion. Material and Methods: The study involved 50 healthy people, 30 men and 20 women, with a mean age of 23.2 years (range 18-30 years). The exclusion criteria were injuries and operations of the knee, which could affect the test results. The participants were divided randomly into two equal groups. The first group received a Kinesiology Taping muscle application (KT group), whereas the remaining individuals received a placebo application of red adhesive tape (placebo group). Both applications were intended to enhance the effects of quadriceps muscle activity. Joint position sense (JPS) was evaluated in this study. The Error of Active Reproduction of the Joint Position (EARJP) of the knee was measured at 45° of flexion. The test was performed prior to applying the tape, with the application in place, 24 hours into wearing, and after removing the tape. The interval between trials was not less than 30 minutes. Statistical analysis was performed using Statistica 12.0. We calculated distribution characteristics and used the Wilcoxon test, Friedman's ANOVA and the Mann-Whitney U test. Results: In the KT group and the placebo group, the mean JPS error was 3.48° and 5.16° respectively before taping, 4.84° and 4.88° after application, 5.12° and 4.96° after 24 hours of the experiment, and 3.84° and 5.12° after removal of the application. Differences over time were not statistically significant in either group, and there were also no significant differences between the groups. Conclusions: 1. Applying Kinesiology Taping to the quadriceps muscle had no significant effect on knee joint proprioception; its use to improve sensorimotor skills therefore seems unjustified. 2. The absence of differences between the KT and placebo applications indicates that the clinical effect of the stretched tape is minimal or absent. 3. The results provide a basis for the continuation of prospective, randomized trials in numerous study groups. Keywords: joint position sense, kinesiology taping, kinesiotaping, knee
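To illustrate the within-group and between-group tests named above (Friedman's ANOVA across the four time points, Mann-Whitney U between groups), the sketch below runs them on simulated per-subject data centred on the reported group means; the simulated values and their spread are placeholders, not the study's measurements.

```python
# Hedged sketch of the within/between-group comparisons named in the abstract.
# Per-subject data are simulated placeholders centred on the reported means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
kt      = np.abs(rng.normal([3.48, 4.84, 5.12, 3.84], 2.0, size=(25, 4)))  # KT group, deg
placebo = np.abs(rng.normal([5.16, 4.88, 4.96, 5.12], 2.0, size=(25, 4)))  # placebo group, deg

chi2, p_time = stats.friedmanchisquare(*kt.T)              # repeated measures within KT group
u, p_groups = stats.mannwhitneyu(kt[:, 1], placebo[:, 1])  # groups compared after taping

print(f"Friedman: chi2 = {chi2:.2f}, p = {p_time:.3f}")
print(f"Mann-Whitney U (after application): U = {u:.0f}, p = {p_groups:.3f}")
```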
Procedia PDF Downloads 339
1319 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters
Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi
Abstract:
Breast cancer is one of the most significant diseases in the United States and other countries and is the second leading cause of cancer death in women. Common breast cancer treatments can lead to adverse side effects such as hair loss, nausea and weakness. These complications arise because such treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted treatment for breast cancer. In this regard, tissue engineering approaches were employed by using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability and support of cell growth. The breast cancer cells used have the ability to form a three-dimensional cell cluster through the spontaneous accumulation of cells within the porosity of the scaffold under specific conditions; a higher porosity density and larger pore size are therefore desirable. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. Images were taken by SEM, and the density and size of the pores were determined with image analysis software. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate-buffered saline (PBS) in order to neutralize the residual glutaraldehyde. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay results indicated approximately 91.13% viability of the cancer cells on the scaffolds. In order to create clusters, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in 24-well plates for five days. The clusters were then exposed to 808 nm laser diode radiation at different powers and exposure times to investigate the effect of the laser on the tumour model. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method for destroying the target cells with minimal effect on healthy tissues and cells, and it can mitigate the limitations of other cancer treatment methods. Keywords: breast cancer, electrospun scaffold, polycaprolactone, laser diode, cancer treatment
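For context on the viability figure above, an MTT readout is normally converted to percent viability from blank-corrected absorbances, as sketched below; the absorbance values are assumed placeholders, not readings from this study.

```python
# Illustrative MTT viability calculation of the kind behind the ~91% figure in
# the abstract; the absorbance readings below are placeholders, not study data.
def viability_percent(a_sample, a_control, a_blank):
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

a_blank = 0.05         # medium + MTT, no cells (assumed)
a_control = 1.20       # cells on tissue-culture plastic (assumed)
a_on_scaffold = 1.10   # cells grown on the PCL scaffold (assumed)

print(f"viability on scaffold: {viability_percent(a_on_scaffold, a_control, a_blank):.1f} %")
```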
Procedia PDF Downloads 143
1318 Structural Analysis of Phase Transformation and Particle Formation in Metastable Metallic Thin Films Grown by Plasma-Enhanced Atomic Layer Deposition
Authors: Pouyan Motamedi, Ken Bosnick, Ken Cadien, James Hogan
Abstract:
Growth of conformal ultrathin metal films has attracted considerable attention recently. Plasma-enhanced atomic layer deposition (PEALD) is a method capable of growing conformal thin films at low temperatures, with exemplary control over thickness. The authors have recently reported on the growth of metastable epitaxial nickel thin films via PEALD, along with a comprehensive characterization of the films and a study of the relationship between the growth parameters and the film characteristics. The goal of the current study is to use these films as a case study to investigate temperature-activated phase transformation and agglomeration in ultrathin metallic films. For this purpose, metastable hexagonal nickel thin films were annealed using a controlled heating/cooling apparatus. The transformations in the crystal structure were observed via in-situ synchrotron X-ray diffraction. The samples were annealed to various temperatures in the range of 400-1100 °C. The onset and progression of particle formation were studied in-situ via laser measurements. In addition, a four-point probe measurement tool was used to record the changes in the resistivity of the films, which is affected by phase transformation as well as by roughening and agglomeration. Thin films annealed at various temperature steps were then studied via atomic force microscopy, scanning electron microscopy and high-resolution transmission electron microscopy, in order to better understand the correlated mechanisms through which phase transformation and particle formation occur. The results indicate that the onset of the hcp-to-fcc transformation is at 400 °C, while particle formation commences at 590 °C. If the annealed films are quenched after transformation but prior to agglomeration, they show a noticeable drop in resistivity. This can be attributed to the fact that the hcp films are grown epitaxially and are under severe tensile strain, and annealing leads to relaxation of the mismatch strain. In general, the results shed light on the nature of structural transformation in nickel thin films and in metallic thin films more generally. Keywords: atomic layer deposition, metastable, nickel, phase transformation, thin film
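As a side note on the resistivity monitoring mentioned above, four-point-probe readings on a thin film are usually converted to resistivity with the standard geometric factor π/ln 2, as in the sketch below; the voltage, current and thickness values are assumed for illustration and are not measurements from this work.

```python
# Sketch of a four-point-probe resistivity estimate of the kind referred to in the
# abstract, using the standard thin-film geometric factor pi/ln(2).
# The V, I and thickness values below are assumed for illustration.
import math

def film_resistivity_ohm_cm(voltage_v, current_a, thickness_nm):
    sheet_resistance = (math.pi / math.log(2)) * voltage_v / current_a  # ohm/sq
    return sheet_resistance * thickness_nm * 1e-7                       # ohm*cm

print(f"{film_resistivity_ohm_cm(voltage_v=2.3e-3, current_a=1e-3, thickness_nm=30):.2e} ohm*cm")
```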
Procedia PDF Downloads 329
1317 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among market prices, or between prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in the statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a need for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process, from the family of fractional processes, offers a promising path: we can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. This required developing a fast algorithm to generate valid and suitably large samples from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of the SCP extends beyond its immediate application: it also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both the SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics. Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
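As a minimal, non-authoritative sketch of the kind of sample generation mentioned above (not the authors' fast algorithm), the snippet below simulates a fractional Ornstein-Uhlenbeck path with an Euler scheme driven by fractional Gaussian noise obtained from a Cholesky factorisation of its covariance; the Hurst parameter, mean-reversion speed and other parameters are assumed example values.

```python
# Minimal sketch (not the authors' algorithm): simulate a fractional
# Ornstein-Uhlenbeck path via an Euler scheme driven by fractional Gaussian noise.
import numpy as np

def fgn(n, hurst, dt, rng):
    """Fractional Gaussian noise increments with Var = dt**(2H)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])] * dt ** (2 * hurst)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fou_path(n=1000, hurst=0.1, theta=2.0, mu=0.0, sigma=0.5, dt=1 / 252, seed=0):
    rng = np.random.default_rng(seed)
    db = fgn(n, hurst, dt, rng)
    x = np.empty(n + 1)
    x[0] = mu
    for i in range(n):                      # Euler discretisation of the fOU SDE
        x[i + 1] = x[i] - theta * (x[i] - mu) * dt + sigma * db[i]
    return x

print(fou_path()[:5])
```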
Procedia PDF Downloads 118