Search results for: finite volume

727 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows that the MIKE URBAN results fall within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be quantified; in contrast, MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC-based framework appears to perform well for automatic calibration.
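
The calibration step described above is, at its core, rejection ABC: draw candidate parameter sets from a prior, run the hydrologic model, and keep the candidates whose simulated hydrograph lies within a tolerance of the observed one; the accepted sets form the posterior sample from which the 95% credible intervals are read. The sketch below illustrates only that loop; the simulate_runoff routing, the uniform priors and the RMSE tolerance are placeholders, not the authors' actual time-area model or settings.

import numpy as np

def simulate_runoff(params, rainfall):
    """Placeholder stand-in for the time-area hydrologic model (not the authors' code)."""
    initial_loss, reduction_factor, t_conc, t_lag = params
    effective = np.clip(rainfall - initial_loss, 0, None) * reduction_factor
    kernel = np.exp(-np.arange(60) / max(t_conc, 1e-6))          # crude routing kernel
    routed = np.convolve(effective, kernel / kernel.sum())[:len(rainfall)]
    return np.roll(routed, int(t_lag))

def abc_rejection(observed, rainfall, n_draws=10000, tol=0.5, seed=None):
    """Keep parameter draws whose simulated runoff is within `tol` RMSE of the observations."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = [rng.uniform(0, 10),   # initial loss (mm), assumed prior
                 rng.uniform(0, 1),    # reduction factor, assumed prior
                 rng.uniform(1, 60),   # time of concentration, assumed prior
                 rng.uniform(0, 30)]   # time lag, assumed prior
        rmse = np.sqrt(np.mean((simulate_runoff(theta, rainfall) - observed) ** 2))
        if rmse < tol:
            accepted.append(theta)
    return np.array(accepted)          # empirical posterior sample over the four parameters

Re-running the model over the accepted parameter sets yields the posterior predictive band whose 2.5% and 97.5% quantiles give the credible intervals mentioned above.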

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

726 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively recent concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicles fleet assortment may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with plethora of sensors monitored by a vehicle computer producing digital reporting. The present work proposes an adaptable internet of things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer – expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification, cellular connectivity, connectivity to the vehicle computer, and connectivity to analog and digital sensors by means of a specially targeted design of expansion board. Specifically, the latter offers a number of adaptability features to cope with the diverse sensor types employed in different vehicles. In standard mode, the IoT sensor node communicates to the data center through cellular network, transmitting all digital/digitized sensor data, IoT device identity and position. Moreover, the proposed IoT sensor node offers connectivity, through WiFi and an appropriate application, to smart phones or tablets allowing the registration of additional vehicle- and driver-specific information and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware.

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

725 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
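
As a concrete illustration of the conflation operation described above: the conflated density is the normalized product of the parent densities, so the input with the smaller variance dominates the result. The sketch below is a minimal numerical example with two assumed exponential recovery-time distributions; the rates are illustrative, not fitted values from the study.

import numpy as np

def conflate(pdfs, x):
    """Conflation: normalized product of probability density functions evaluated on grid x."""
    prod = np.ones_like(x)
    for pdf in pdfs:
        prod *= pdf(x)
    return prod / np.trapz(prod, x)

# Illustrative rates (per month) for severe-event and nuisance-event recovery times.
severe = lambda t: 0.2 * np.exp(-0.2 * t)      # mean recovery ~5 months (assumed)
nuisance = lambda t: 2.0 * np.exp(-2.0 * t)    # mean recovery ~0.5 month (assumed)

t = np.linspace(0.0, 60.0, 6001)
combined = conflate([severe, nuisance], t)
mean_recovery = np.trapz(t * combined, t)      # conflated estimate of community recovery time

For exponential inputs the conflation is again exponential, with rate equal to the sum of the parent rates, so the combined recovery-time estimate is pulled toward the lower-variance input, consistent with the weighting behaviour described above.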

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

724 Evaluating the Small-Strain Mechanical Properties of Cement-Treated Clayey Soils Based on the Confining Pressure

Authors: M. A. Putera, N. Yasufuku, A. Alowaisy, R. Ishikura, J. G. Hussary, A. Rifa’i

Abstract:

Indonesia’s government has planned a high-speed railway project connecting the capital cities Jakarta and Surabaya, a distance of about 700 km. Part of the planned route passes over a lowland soil region. The lowland soil region comprises cohesive soil with high water content and a high compressibility index, which leads to settlement problems. Among the variety of railway track structures, the ballastless track has been used effectively to reduce settlement; it provides a lightweight structure and minimizes workspace. However, deploying this thin-layer structure above the lowland area is accompanied by several problems, such as a lack of bearing capacity and deflection behavior during traffic loading. It is therefore necessary to combine it with ground improvement to assure acceptable settlement behavior of the clayey soil. Considering the assurance of strength increment and the working period, methods such as cement-treated soil were adopted for the substructure of the railway track. In the field, mechanical properties are commonly evaluated using the plate load test and the cone penetration test. However, observing the increment of mechanical properties involves uncertainty, especially when evaluating cement-treated soil in the substructure. The current quality control of cement-treated soils is established by laboratory tests. Moreover, small-strain measurement devices in the laboratory can give more reliable results that are close to field measurements. The aims of this research are to show the correlation of confining pressure with the initial Young’s modulus (E0), Poisson’s ratio (υ0) and shear modulus (G0) within small-strain ranges. Furthermore, discrepancies between those parameters were also investigated. The experimental results confirmed the correlation between cement content and confining pressure through a power function. In addition, higher cement ratios showed larger discrepancies, and conversely for low mixing ratios.

Keywords: Cement content, confining pressure, high-speed railway, small strain ranges.

723 Model of Community Management for Sustainable Utilization

Authors: Luedech Girdwichai, Witthaya Mekhum

Abstract:

This research intended to develop the model of community management for sustainable utilization by investigating on 2 groups of population, the family heads and the community management team. The population of the former group consisted of family heads from 511 families in 12 areas to complete the questionnaires which were returned at 479 sets. The latter group consisted of the community management team of 12 areas with 1 representative from each area to give the interview. The questionnaires for the family heads consisted of 2 main parts; general information such as occupations, etc. in the form of checklist. The second part dealt with the data on self reliance community development based on 4P Framework, i.e., People (human resource) development, Place (area) development, Product (economic and income source) development, and Plan (community plan) development in the form of rating scales. Data in the 1st part were calculated to find frequency and percentage while those in the 2nd part were analyzed to find arithmetic mean and SD. Data from the 2nd group of population or the community management team were derived from focus group to find factors influencing successful management together with the in depth interview which were analyzed by descriptive statistics. The results showed that 479 family heads reported that the aspect on the implementation of community plan to self reliance community activities based on Sufficient Economy Philosophy and the 4P was at the average of 3.28 or moderate level. When considering in details, it was found that the 1st aspect was on the area development with the mean of 3.71 or high level followed by human resource development with the mean of 3.44 or moderate level, then, economic and source of income development with the mean of 3.09 or moderate level. The last aspect was community plan development with the mean of 2.89. The results from the small group discussion revealed some factors and guidelines for successful community management as follows: 1) on the People (human resource) development aspect, there was a project to support and develop community leaders. 2) On the aspect of Place (area) development, there was a development on conservative tourism areas. 3) On the aspect of Product (economic and source of income) development, the community leaders promoted the setting of occupational group, saving group, and product processing group. 4) On the aspect of Plan (community plan) development, there was a prioritization through public hearing.

Keywords: Model of community management, sustainable utilization.

722 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

In this study, the aim was to establish a procedure for the identification of rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imageries acquired in the production season of 2013 with Path/Row number 181/32 were used. Four different seasonal images were generated utilizing the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The areas (ha, %) of each class were calculated. In addition, district-based rice distribution maps were developed, and the results of these maps were compared with the Turkish Statistical Institute's (TurkSTAT; TSI) actual rice cultivation area records. Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherence with the TSI results. Additionally, rice areas on slopes over 4° were considered mis-classified pixels, and they were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August and September separately), to test whether they may be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map, considering the TSI data and accuracy assessment results, and mis-classified pixels were eliminated from this map. According to the results, 83151.5 ha of rice areas exist within the study area. However, this result is higher than the TSI records with an area of 12702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in Meric district. It was seen that using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal given the relatively low resolution of the images. Thus, employment of images with higher spectral, spatial, temporal and radiometric resolutions may provide more reliable results.
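
The spectral indices used above are simple band ratios, so the raster-calculator test applied in the Meric district can be reproduced with a few array operations. The sketch below is an illustration only: it assumes Landsat 8 reflectance and LST bands loaded as NumPy arrays, and the NDVI/LSWI/LST value ranges shown are hypothetical placeholders for the ranges actually extracted from the randomized rice zones.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from Landsat 8 bands 5 (NIR) and 4 (red)."""
    return (nir - red) / (nir + red + 1e-9)

def lswi(nir, swir1):
    """Land Surface Water Index from bands 5 (NIR) and 6 (SWIR1)."""
    return (nir - swir1) / (nir + swir1 + 1e-9)

def rice_mask(nir, red, swir1, lst, slope_deg,
              ndvi_rng=(0.4, 0.8), lswi_rng=(0.1, 0.5), lst_rng=(295.0, 310.0)):
    """Flag pixels whose indices fall inside the rice value ranges (threshold numbers are assumed)."""
    v, w = ndvi(nir, red), lswi(nir, swir1)
    inside = ((ndvi_rng[0] <= v) & (v <= ndvi_rng[1]) &
              (lswi_rng[0] <= w) & (w <= lswi_rng[1]) &
              (lst_rng[0] <= lst) & (lst <= lst_rng[1]))
    return inside & (slope_deg <= 4.0)   # drop pixels on slopes over 4 degrees, as in the study

Summing the mask and multiplying by the 30 m x 30 m Landsat pixel area (0.09 ha) gives the rice area in hectares that is compared with the TSI records.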

Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.

721 Machine Learning Techniques for Short-Term Rain Forecasting System in the Northeastern Part of Thailand

Authors: Lily Ingsrisawang, Supawadee Ingsriswang, Saisuda Somchit, Prasert Aungsuratana, Warawut Khantiyanan

Abstract:

This paper presents a methodology based on machine learning approaches for a short-term rain forecasting system. Decision Tree, Artificial Neural Network (ANN), and Support Vector Machine (SVM) methods were applied to develop classification and prediction models for rainfall forecasts. The goals of this work are to demonstrate (1) how feature selection can be used to identify the relationships between rainfall occurrences and other weather conditions and (2) what models can be developed and deployed for predicting accurate rainfall estimates to support decisions to launch cloud seeding operations in the northeastern part of Thailand. Datasets were collected during 2004-2006 from the Chalermprakiat Royal Rain Making Research Center at Hua Hin, Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making Research Center at Pimai, Nakhon Ratchasima, and the Thai Meteorological Department (TMD). A total of 179 records with 57 features were merged and matched by unique date. There are three main parts to this work. Firstly, a decision tree induction algorithm (C4.5) was used to classify the rain status into either rain or no-rain; the overall accuracy of the classification tree reaches 94.41% with five-fold cross-validation. The C4.5 algorithm was also used to classify the rain amount into three classes, no-rain (0-0.1 mm), few-rain (0.1-10 mm), and moderate-rain (>10 mm), with an overall accuracy of 62.57%. Secondly, an ANN was applied to predict the rainfall amount, and the root mean square error (RMSE) was used to measure the training and testing errors of the ANN. It was found that the ANN yields a lower RMSE of 0.171 for daily rainfall estimates when compared to next-day and next-2-day estimation. Thirdly, the ANN and SVM techniques were also used to classify the rain amount into the same three classes (no-rain, few-rain, and moderate-rain). The results achieved 68.15% and 69.10% overall accuracy for same-day prediction with the ANN and SVM models, respectively. The obtained results illustrate the comparative predictive power of the different methods for rainfall estimation.
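
The classification stage described above can be reproduced compactly with a decision-tree pipeline plus feature selection and five-fold cross-validation. The sketch below uses scikit-learn as a stand-in for the original C4.5 implementation (scikit-learn's tree is CART with an entropy criterion, not C4.5), and the CSV file name, column names and the number of selected features are assumptions.

import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical file and column names; the study merged 179 records with 57 features by date.
data = pd.read_csv("rain_dataset_2004_2006.csv")
X = data.drop(columns=["date", "rain_status"])
y = data["rain_status"]                       # "rain" / "no-rain" labels

model = make_pipeline(
    SelectKBest(f_classif, k=20),             # feature selection step (k is an assumption)
    DecisionTreeClassifier(criterion="entropy", random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)   # five-fold cross-validation, as in the paper
print(f"mean accuracy: {scores.mean():.4f}")

The same pipeline, with the target recoded into the no-rain/few-rain/moderate-rain classes, covers the three-class experiments as well.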

Keywords: Machine learning, decision tree, artificial neural network, support vector machine, root mean square error.

720 Effects of the Coagulation Bath and Reduction Process on SO2 Adsorption Capacity of Graphene Oxide Fiber

Authors: Özge Alptoğa, Nuray Uçar, Nilgün Karatepe Yavuz, Ayşen Önen

Abstract:

Sulfur dioxide (SO2) is a very toxic air pollutant gas, and it causes the greenhouse effect, photochemical smog, and acid rain, which threaten human health severely. Thus, the capture of SO2 gas is very important for the environment. Graphene, which is a two-dimensional material, has excellent mechanical, chemical and thermal properties, and many application areas such as energy storage devices, gas adsorption, sensing devices, and optical electronics. Further, graphene oxide (GO) is regarded as a good adsorbent because of important features such as the functional groups (epoxy, carboxyl and hydroxyl) on its surface and its layered structure. The SO2 adsorption properties of fibers have usually been investigated on carbon fibers. In this study, the potential adsorption capacity of GO fibers was investigated. A GO dispersion was first obtained from graphite with Hummers’ method, and then GO fibers were obtained via a wet spinning process. These fibers were converted into a disc shape, dried, and then subjected to an SO2 gas adsorption test. The SO2 gas adsorption capacity of the GO fiber discs was investigated with respect to the use of different coagulation baths and reduction by hydrazine hydrate. As coagulation baths, single and triple baths were used. In the single bath, only ethanol and CaCl2 (calcium chloride) salt were added. In the triple bath, each bath had a different concentration of water/ethanol and CaCl2 salt, and the disc obtained from the triple bath is referred to as the reference disc. The fibers produced with the single bath were flexible and rough, and the analyses show that they had a higher SO2 adsorption capacity than the triple-bath fibers (reference disc). However, the reduction process did not increase the adsorption capacity, because the SEM images showed that the layers and the uniform structure of the fiber form were damaged, and reduction decreased the number of functional groups to which SO2 attaches. Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) analyses were performed on the fibers and discs, and their effects on the results were interpreted. In future applications of the study, it is aimed that factors such as pH and additives will be examined.

Keywords: Coagulation bath, graphene oxide fiber, reduction, SO2 gas adsorption.

719 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters

Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi

Abstract:

Breast cancer is one of the most significant diseases in the United States and other countries and is the second leading cause of death in women. Common breast cancer treatments lead to adverse side effects such as loss of hair, nausea, and weakness. These complications arise because the treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted cancer treatment for breast cancer. In this regard, tissue engineering approaches were employed by using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability, and support of cell growth. The breast cancer cells have the ability to create a three-dimensional cell cluster due to the spontaneous accumulation of cells in the porosity of the scaffold under specific conditions; therefore, a higher porosity density and a larger pore size are sought. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. The images were taken by SEM, and the density and size of the porosity were determined by image analysis. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate-buffered saline (PBS) in order to neutralize the residual glutaraldehyde. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) results showed approximately 91.13% viability of the cancer cells on the scaffolds. In order to create clusters, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in 24-well plates for five days. Then, the clusters were exposed to 808 nm laser diode radiation at different powers and exposure times to investigate the effect of the laser on the tumor. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method to destroy the target cells with minimal effect on healthy tissues and cells, and it can mitigate the limitations of other cancer treatment methods.

Keywords: Breast cancer, electrospun scaffold, polycaprolactone, laser diode, cancer treatment.

718 Causes of Slum Emergence from Decently Built Government's Affordable Housing Projects in Enugu, Nigeria: The Experts’ Perspectives

Authors: Anthony Ikechukwu Agboeze, Walter Timo de Vries, Pamela Durán-Díaz

Abstract:

Since attaining urban status, the population of Enugu, Nigeria, has continued to grow rapidly, leading to growing demands for housing by the teeming population which is predominantly low income. Several government dispensations have developed various affordable housing projects to help deliver decent housing to the Enugu populace. However, over a long period of usage, some of those housing projects in Enugu are unabatedly deteriorating into slums alongside rising housing deficits which has remained problematic for most Nigerian urban centers to address. Emerging from a literature review, this research posits that the link between slum and affordable housing is that both the seekers of affordable housing and slum housing are the low-income earners. This research further investigated the possible causalities of slum emergence from decently built affordable housing projects in Enugu, Nigeria. To do so, we first analyzed the Nigerian housing policy to examine how the policy addresses slum prevention. We further conducted semi-structured expert interviews (qualitative) to sample the views of private housing developers on the degeneration of government housing projects into slums in Enugu, Nigeria. Findings from the housing policy analysis suggest that the housing policy itself is not legally binding on anybody to implement. Sequel to this non-compulsory nature of the housing policy is the poor/non-implementation of the Nigerian housing policy, leading to a constant tendency by the government developers (contractors) to deliver potential slums. The expert respondents corroborated this viewpoint by suggesting that poor planning (including designs of the housing units and the master plan) and poor management (including non-maintenance, poor documentation, and inaccurate housing inventory) are germane to the emergence of slums from affordable housings. This research recommends periodic auditing of delivered housing projects to evaluate the developers’ adherence to the housing policy guidelines – it proposes incentives to policy adherents since the housing policy is not legally binding. We also recommend a participatory management to engage the occupants in the monitoring and reporting of breakdowns in the housing properties – to help improve the quality of management and maintenance to have slum-free settlements.

Keywords: Affordable housing, Enugu, low income, Nigeria, slum.

717 The Effect of Simulated Acid Rain on Glycine max

Authors: Nilima Gajbhiye

Abstract:

Acid rain occurs when sulphur dioxide (SO2) and nitrogen oxide (NOx) gases react in the atmosphere with water, oxygen, and other chemicals to form various acidic compounds. The result is a mild solution of sulfuric acid and nitric acid. Soil has a greater buffering capacity than aquatic systems; however, excessive amounts of acid introduced by acid rain may disturb the entire soil chemistry. Acidity and the harmful action of toxic elements damage vegetation, while susceptible microbial species are eliminated. In the present study, the effects of simulated sulphuric acid and nitric acid rains were investigated on the crop Glycine max. The effect of acid rain on changes in soil fertility was examined: the pH of the control sample was 6.5, while the pH of the 1% H2SO4 and 1% HNO3 treated samples was 3.5. Nitrate nitrogen in the soil was high in the 1% HNO3 treated soil and the control sample. Ammonium nitrogen in the soil was low in the 1% HNO3 and 1% H2SO4 treated soils and medium in the control and the other samples. Regarding the effect of acid rain on seed germination, on the 3rd day of germination the control sample growth was 7 cm, the 0.1% HNO3 sample was 8 cm, and the 0.001% HNO3 and 0.001% H2SO4 samples were 6 cm each. On the 10th day, fungal growth was observed at the 1% and 0.1% H2SO4 concentrations, when all plants were dead. The effect of acid rain on crop productivity was also investigated: on the 3rd day, roots had developed in the plants. On the 12th day, Glycine max showed more growth in the 0.1% HNO3 treatment, while the 0.001% HNO3 and 0.001% H2SO4 treated plants grew the same as the control plants. On the 20th day, discoloration of plant pigments was observed on the leaves of the acid-treated plants. On the 38th day, the 0.1% and 0.001% HNO3, the 0.1% and 0.001% H2SO4 treated plants, and the control plants were showing flower growth. On the 42nd day, the acid-treated Glycine max plants and the control plants showed seeds on the plants. The 0.1% and 0.001% H2SO4 and the 0.1% and 0.001% HNO3 treated Glycine max plants were dead on the 46th day, and fungal growth was observed. The toxicological study carried out on Glycine max plants showed that cells exposed to 1% HNO3 were damaged more than those exposed to 1% H2SO4. Leaf sections exposed to 0.001% HNO3 and H2SO4 showed less cell damage, and pigmentation was observed across the entire slide when compared with the control plant. Soil analysis was done to detect microorganisms in the HNO3 and H2SO4 treated Glycine max soils and the control plants' soil. No microbial growth was observed at 1% HNO3 and H2SO4, but the control plant soil showed microbial growth.

Keywords: Acid rain, Glycine max, HNO3 & H2SO4, Pigmentation.

716 Impact of Long Term Application of Municipal Solid Waste on Physicochemical and Microbial Parameters and Heavy Metal Distribution in Soils in Accordance to Its Agricultural Uses

Authors: Rinku Dhanker, Suman Chaudhary, Tanvi Bhatia, Sneh Goyal

Abstract:

Municipal Solid Waste (MSW), being a rich source of organic materials, can be used in agricultural applications as an important source of nutrients for soil and plants. This is also an alternative beneficial management practice for the MSW generated in developing countries. In the present study, soil samples treated with MSW for the last four to six years were collected from farmers’ fields in Rohtak and Gurgaon districts (Haryana, India). The samples were analyzed for all important agricultural parameters and compared with untreated control soil samples. The treated soils showed increases in total N by 48 to 68%, P by 45.7 to 51.3%, and K by 60 to 67% compared to the untreated soil samples. Application of sewage sludge at the different sites led to an increase in microbial biomass C by 60 to 68% compared to untreated soil. There was a significant increase in total Cu, Cr, Ni, Fe, Pb, and Zn in all sewage sludge amended soil samples; however, the concentrations of all the metals were still below the currently permitted (EU) limits. To study the adverse effect of heavy metal accumulation on various soil microbial activities, sewage sludge samples (from the wastewater treatment plant at Gurgaon) were artificially contaminated with heavy metal concentrations above the EU limits. They were then applied to soil samples at different rates (0.5 to 4.0%) and incubated for 90 days under laboratory conditions. The samples were drawn at different intervals and analyzed for various parameters such as pH, EC, total N, P, K, microbial biomass C, carbon mineralization, and diethylenetriaminepentaacetic acid (DTPA) extractable heavy metals. The results were compared to those for uncontaminated sewage sludge. Increasing the level of sewage sludge from 0.5 to 4% led to a build-up of organic C and total N, P and K contents at the early stages of incubation. However, organic C decreased after 90 days because of the decomposition of organic matter. Biomass production was significantly increased in both contaminated and uncontaminated sludge-amended soil samples, but this also led to slight increases in metal accumulation and bioavailability in the soil. The maximum metal concentrations were found in the treatment with 4% contaminated sewage sludge amendment.

Keywords: Heavy metals, municipal sewage sludge, sustainable agriculture, soil fertility, quality.

715 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Cheima Ben Soltane, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, used for estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in cases where the GMM and VQ identifications contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
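
The modelling chain described above (MFCC features, one GMM per enrolled speaker trained with EM, identification by maximum log-likelihood) can be sketched compactly. The example below is an illustration, not the authors' implementation: it uses librosa for MFCC extraction and scikit-learn's GaussianMixture (whose EM is initialized by k-means rather than LBG), and it omits the VAD and VQ stages.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, n_mfcc=13):
    """Frame-level MFCC feature matrix for one utterance."""
    signal, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T   # shape (frames, n_mfcc)

def train_speaker_models(enroll_wavs, n_components=16):
    """Fit one GMM per speaker from a dict {speaker_id: [training wav paths]}."""
    models = {}
    for speaker, paths in enroll_wavs.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[speaker] = GaussianMixture(n_components, covariance_type="diag",
                                          max_iter=200).fit(feats)
    return models

def identify(wav_path, models):
    """Closed-set decision: the speaker whose GMM gives the highest average log-likelihood."""
    feats = mfcc_features(wav_path)
    return max(models, key=lambda spk: models[spk].score(feats))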

Keywords: Feature Extraction, Speaker Modeling, Feature Matching, Mel Frequency Cepstrum Coefficient (MFCC), Gaussian Mixture Model (GMM), Vector Quantization (VQ), Linde-Buzo-Gray (LBG), Expectation Maximization (EM), Pre-processing, Voice Activity Detection (VAD), Short Time Energy (STE), Background Noise Statistical Modeling, Closed-Set Text-Independent Speaker Identification System (CISI).

714 A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Authors: Shafiq R. Qureshi, Syed Noman Danish, Muhammad Saeed Khalid

Abstract:

Fossil fuels are the major source for meeting world energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Renewable energy utilization is the need of the hour to meet future challenges, and ocean energy is one of these promising energy resources. Three-fourths of the earth's surface is covered by the oceans. This enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents and offshore solar energy. Very few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy needs detailed knowledge of the underlying mathematical governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict the wave climatology in laboratory simulation. Several techniques have been developed, mostly stemming from the numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools to understand and analyze wave climatology. Models of the 1st, 2nd and 3rd generations have been developed to estimate wave characteristics and assess the power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. The squeezing action of the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore applications. The increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is converted to renewable energy. This approach will result in an economical wave energy converter due to near-shore installation and denser waves due to shoaling, and the method will be more efficient because it taps both the potential and kinetic energy of the waves.
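
As background for the shoaling argument above: for shallow-water waves, conservation of the energy flux between two depths gives the classical shoaling relation (Green's law), which quantifies how the wave height, and with it the potential energy per unit surface area, grows as the depth decreases. This standard textbook relation is quoted here only for illustration; the paper itself does not state the formula.

\[
E\,c_g = \text{const}, \qquad E = \tfrac{1}{8}\rho g H^{2}, \qquad c_g \approx \sqrt{gh}
\quad\Longrightarrow\quad
\frac{H_2}{H_1} = \left(\frac{h_1}{h_2}\right)^{1/4}
\]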

Keywords: Energy utilization, wave shoaling phenomenon.

713 Maternal and Child Health Care: A Study among the Rongmeis of Manipur, India

Authors: Lorho Mary Maheo, Arundhati Maibam Devi

Abstract:

Background: Maternal and child health (MCH) cares are the health services provided to mothers and children. It includes the health promotion, preventive, curative and rehabilitation health care for mothers and children. Materials and method: The present study sample comprises of 208 women within the age range 15-69 years from two remote villages of Tamenglong District in Manipur. They were randomly chosen for assessing their health as well as the child’s health adopting an interview schedule method. Results: The findings of the study revealed that majority (80%) of the women have their first conception in their first year of married life. A decadal change has been observed with regard to the last pregnancy i.e., antenatal check-up, place of delivery as well as the service provider. However, irrespective of age of the women, home delivery is still preferred though very few are locally trained. Pre- and post-delivery resting period vary depending on the busy schedule of the agricultural works as the population under study is basically agriculturist. Postnatal care remains to be traditional as they are strongly associated with cultural beliefs and practices that continue to prevail in the studied community. Breast feeding practices such as colostrums given, initiation of breastfeeding, weaning was all taken into account.  Immunization of children has not reached the expected target owing to a variety of reasons. Maternal health care also includes use of birth control measures. The health status of women would invariably improve if family planning is meaningfully adopted. Only 10.1% of the women adopted the modern birth control implying its deep-rooted value attached to the children. Based on the self-assessment report on their health treatment a good number of the respondents resorted to self-medication even to the extent of buying allopathic medicine without a doctor’s prescription. One important finding from the study is the importance attributed to the traditional health care system which is easily affordable and accessible to the villagers. Conclusion: The overall condition of maternal and child care is way behind till now as no adequate/proper health services are available.

Keywords: Antenatal, breastfeeding, child health, maternal, Tamenglong District.

712 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and has become popular in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective power forecasting of renewable energy load allows decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar energy power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out with these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the layer count, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
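
A minimal version of the LSTM setup described above can be written in a few lines of Keras. This is an illustrative sketch rather than the authors' 432-model search: the look-back window, layer size, dropout rate and training schedule are assumptions, and a synthetic sinusoidal series stands in for the real hourly measurements.

import numpy as np
from tensorflow import keras

def make_windows(series, lookback=24):
    """Turn an hourly series into (samples, lookback, 1) inputs and next-hour targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X[..., np.newaxis], series[lookback:]

rng = np.random.default_rng(0)
hours = np.arange(24 * 730)                          # roughly two years of hourly steps
series = np.sin(2 * np.pi * hours / 24) + 0.1 * rng.normal(size=hours.size)   # synthetic stand-in

X, y = make_windows(series, lookback=24)
split = int(0.8 * len(X))                            # chronological train/test split

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(24, 1)),      # cell count is an assumption
    keras.layers.Dropout(0.2),                       # dropout rate is an assumption
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])   # ADAM optimizer, MSE/MAE metrics
model.fit(X[:split], y[:split], epochs=20, batch_size=32,
          validation_data=(X[split:], y[split:]))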

Keywords: Deep learning, long short-term memory, energy, renewable energy load forecasting.

711 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching

Authors: Denise Levy, Anna Lucia C. H. Villavicencio

Abstract:

In a rapid-changing world, science teachers face considerable challenges. In addition to the basic curriculum, there must be included several transversal themes, which demand creative and innovative strategies to be arranged and integrated to traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves seem to be unaware of the issue, most often perpetuating prejudice, errors and misconceptions. This article presents the authors’ experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum, in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to the best practices to improve learning processes privileging constructivist educational techniques, with emphasis on active learning process, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can articulate nuclear science to different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content counts on high scientific rigor and articulate nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential value of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and significant approach. It is expected that the interdisciplinary pedagogical proposal contributes to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.

Keywords: Science education, interdisciplinary learning, nuclear science, scientific literacy.

710 Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

Authors: Omar F. Hamad, T. Marwala

Abstract:

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping the max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is computed as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery which could otherwise be hindered by the induced packet loss occurring in other schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 - α), respectively, with an arbitrarily chosen α ranging between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and a balance between the BDR and the LDR, whichever is the more critical factor. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS less than that of the source node. To maintain a sense of load balance, the children of each level's siblings are evenly distributed such that a node cannot accept a second child until all of its siblings that are able to do so have already acquired the same number of children, and so on; this is done logically from left to right in a conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme with other schemes - Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) - have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delay.
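
Read literally, the scoring rule above combines a bandwidth discrepancy ratio and a latency discrepancy ratio with complementary weights α and (1000 - α), and the resulting scores order the nodes below the source in max-heap fashion. The sketch below is one plausible rendering of that rule for illustration only; the paper describes the NGS as a bandwidth-latency product, so the exact combination (and the heap-shaping, child-balancing procedure) may differ from what is shown here.

import heapq

def node_gain_score(Ba, Br, Lp, Lb, alpha=500):
    """One plausible NGS: weighted combination of bandwidth and latency discrepancy ratios.

    Ba: estimated available bandwidth, Br: requested bandwidth,
    Lp: proposed latency to the prospective parent, Lb: best latency advised by the source.
    """
    bdr = Ba / Br          # > 1 means more bandwidth is available than requested
    ldr = Lb / Lp          # > 1 means the node is closer (in latency) than the advised best
    return alpha * bdr + (1000 - alpha) * ldr

def placement_order(nodes, alpha=500):
    """Node IDs sorted by descending NGS, i.e. the insertion order for a max-heap-form overlay."""
    scored = [(-node_gain_score(**n["metrics"], alpha=alpha), n["id"]) for n in nodes]
    heapq.heapify(scored)                  # heapq is a min-heap, hence the negated scores
    return [heapq.heappop(scored)[1] for _ in range(len(scored))]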

Keywords: Overlay multicast, available bandwidth, max-heap-form overlay, induced packet loss, bandwidth-latency product, Node Gain Score (NGS).

709 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential technology that is widely used nowadays. It is popular due to its convenience for use with mobile devices. This is especially true for Internet users worldwide who use WiFi connections. There are many location-based services available nowadays that use Wireless Fidelity (WiFi) signal fingerprinting. A common example that is gaining popularity in this era is Foursquare. In this work, the WiFi signal is used to estimate the user or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, inconsistent WiFi signals make the estimation differ at different time intervals. Given this, an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Due to these factors, this work uses a method of reducing the signal noise and estimating the location using the Nearest Neighbour approach, based on the past behaviour of the signal, to increase the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching. The repository acts as the server and supports the decisions of the client-side application. Numerous previous works have adopted methods of collecting signal strengths into a repository over the years, but these were mostly static. In this work, proposed solutions for how the adaptive method matches the received signal to the data in the repository are highlighted. With this approach, location estimation can be done more accurately. The adaptive update allows the latest location fingerprint to be stored in the repository; furthermore, any redundant location fingerprints are removed and only the updated version of each fingerprint is stored. How the user's location can be estimated is detailed further in the proposed solution section. After studying previous works, it was found that the Artificial Neural Network is the most feasible method to deploy for updating the repository and making it adaptive. The function of the Artificial Neural Network is to perform pattern matching of the WiFi signal against the existing data available in the repository.
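
The core estimation step described above is nearest-neighbour matching in RSSI space: compare the vector of received signal strengths against the stored fingerprints and return the closest reference location. The sketch below is a minimal illustration with Euclidean distance; the fingerprint record layout, the weighted averaging of the k closest points and the median noise filter are assumptions rather than the authors' design, and the adaptive ANN update of the repository is omitted.

import numpy as np

def smooth_rssi(samples):
    """Simple noise reduction: median of repeated scans taken at the same spot."""
    return np.median(np.asarray(samples, dtype=float), axis=0)

def estimate_location(rssi_vector, fingerprints, k=3):
    """Weighted k-nearest-neighbour position estimate from an RSSI fingerprint repository.

    rssi_vector: dBm readings for the visible access points, in a fixed order.
    fingerprints: list of (position_xy, reference_rssi_vector) records.
    """
    dists = np.array([np.linalg.norm(np.asarray(rssi_vector) - np.asarray(ref))
                      for _, ref in fingerprints])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)            # closer fingerprints count more
    positions = np.array([fingerprints[i][0] for i in nearest], dtype=float)
    return tuple(np.average(positions, axis=0, weights=weights))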

Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.

708 Suicide Conceptualization in Adolescents through Semantic Networks

Authors: K. P. Valdés García, E. I. Rodríguez Fonseca, L. G. Juárez Cantú

Abstract:

Suicide is a global, multidimensional and dynamic problem of mental health, which requires a constant study for its understanding and prevention. When research of this phenomenon is done, it is necessary to consider the different characteristics it may have because of the individual and sociocultural variables, the importance of this consideration is related to the generation of effective treatments and interventions. Adolescents are a vulnerable population due to the characteristics of the development stage. The investigation was carried out with the objective of identifying and describing the conceptualization of adolescents of suicide, and in this process, we find possible differences between men and women. The study was carried out in Saltillo, Coahuila, Mexico. The sample was composed of 418 volunteer students aged between 11 and 18 years. The ethical aspects of the research were reviewed and considered in all the processes of the investigation with the participants, their parents and the schools to which they belonged, psychological attention was offered to the participants and preventive workshops were carried in the educational institutions. Natural semantic networks were the instrument used, since this hybrid method allows to find and analyze the social concept of a phenomenon; in this case, the word suicide was used as an evocative stimulus and participants were asked to evoke at least five words and a maximum 10 that they thought were related to suicide, and then hierarchize them according to the closeness with the construct. The subsequent analysis was carried with Excel, yielding the semantic weights, affective loads and the distances between each of the semantic fields established according to the words reported by the subjects. The results showed similarities in the conceptualization of suicide in adolescents, men and women. Seven semantic fields were generated; the words were related in the discourse analysis: 1) death, 2) possible triggering factors, 3) associated moods, 4) methods used to carry it out, 5) psychological symptomatology that could affect, 6) words associated with a rejection of suicide, and finally, 7) specific objects to carry it out. One of the necessary aspects to consider in the investigations of complex issues such as suicide is to have a diversity of instruments and techniques that adjust to the characteristics of the population and that allow to understand the phenomena from the social constructs and not only theoretical. The constant study of suicide is a pressing need, the loss of a life from emotional difficulties that can be solved through psychiatry and psychological methods requires governments and professionals to pay attention and work with the risk population.
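
The natural semantic network analysis described above reduces to a weighted count: each participant's evoked words are ranked by closeness to the stimulus word, and a word's semantic weight accumulates more from higher ranks. The snippet below is a generic illustration of that computation, giving rank 1 a weight of 10 down to rank 10 a weight of 1, which is a common convention; it is not necessarily the exact scoring applied in the authors' Excel analysis.

from collections import Counter

def semantic_weights(responses, max_rank=10):
    """Accumulate each word's semantic weight from ranked evocations.

    responses: one list per participant, ordered from closest to the stimulus (rank 1) outward.
    """
    weights = Counter()
    for words in responses:
        for rank, word in enumerate(words[:max_rank], start=1):
            weights[word.lower()] += max_rank + 1 - rank   # rank 1 -> 10 points, rank 10 -> 1
    return weights

# Toy example with three participants (illustrative words only, not study data).
responses = [["death", "sadness", "depression", "rope"],
             ["death", "depression", "loneliness"],
             ["sadness", "death", "pills"]]
core_set = semantic_weights(responses).most_common(5)   # highest-weight words define the core set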

Keywords: Adolescents, semantic networks, speech analysis, suicide.

707 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is therefore scheduled to become the first nuclear power unit in the country to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: firstly, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Secondly, the radiological conditions of systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Thirdly, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourthly, the selection of proper decontamination and dismantling (D&D) technologies is made considering various factors. Finally, the amount of decommissioning waste by classification for Kori 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of the contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison showed that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from plants decommissioned in the U.S. and were comparable to those from plants in Europe. This result comes from the difference in disposal costs and clearance criteria (i.e., free release levels) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information in establishing the decommissioning plan, including the decommissioning schedule and the waste management strategy covering the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: Characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste.

706 The Experimental and Numerical Analysis of the Joining Processes for Air Conditioning Systems

Authors: M.St. Węglowski, D. Miara, S. Błacha, J. Dworak, J. Rykała, K. Kwieciński, J. Pikuła, G. Ziobro, A. Szafron, P. Zimierska-Nowak, M. Richert, P. Noga

Abstract:

In the paper the results of welding of car’s air-conditioning elements are presented. These systems based on, mainly, the environmental unfriendly refrigerants. Thus, the producers of cars will have to stop using traditional refrigerant and to change it to carbon dioxide (R744). This refrigerant is environmental friendly. However, it should be noted that the air condition system working with R744 refrigerant operates at high temperature (up to 150 °C) and high pressure (up to 130 bar). These two parameters are much higher than for other refrigerants. Thus new materials, design as well as joining technologies are strongly needed for these systems. AISI 304 and 316L steels as well as aluminium alloys 5xxx are ranked among the prospective materials. As a joining process laser welding, plasma welding, electron beam welding as well as high rotary friction welding can be applied. In the study, the metallographic examination based on light microscopy as well as SEM was applied to estimate the quality of welded joints. The analysis of welding was supported by numerical modelling based on Sysweld software. The results indicated that using laser, plasma and electron beam welding, it is possible to obtain proper quality of welds in stainless steel. Moreover, high rotary friction welding allows to guarantee the metallic continuity in the aluminium welded area. The metallographic examination revealed that the grain growth in the heat affected zone (HAZ) in laser and electron beam welded joints were not observed. It is due to low heat input and short welding time. The grain growth and subgrains can be observed at room temperature when the solidification mode is austenitic. This caused low microstructural changes during solidification. The columnar grain structure was found in the weld metal. Meanwhile, the equiaxed grains were detected in the interface. The numerical modelling of laser welding process allowed to estimate the temperature profile in the welded joint as well as predicts the dimensions of welds. The agreement between FEM analysis and experimental data was achieved.  

Keywords: Car air-conditioning, microstructure, numerical modelling, welding.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 754
705 Transformation of Aluminum Unstable Oxyhydroxides in Ultrafine α-Al2O3 in Presence of Various Seeds

Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia

Abstract:

Ceramics based on aluminum oxide have a wide range of applications owing to their unique properties, such as wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-efficient process and is therefore relevant to the development of corundum ceramic fabrication technologies. The present work discusses the possibility of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (as well as Th and Re). Unstable aluminum oxyhydroxides were obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at 80-90 °C. β-Al(OH)3 was obtained from aluminum powder by ultrasonic treatment. The oxyhydroxide sols were dried in the presence of various seeds: neodymium, holmium, thorium, lanthanum, cerium, gadolinium, and dysprosium nitrates and rhenium carbonyls were added to the sol specimens in amounts of 0.1-0.2 wt% calculated on the metals. The obtained gels were annealed at 70-1100 °C for 2 h. Transformation into α-Al2O3 takes place at 1100 °C; at this temperature, in the specimens containing lanthanum and gadolinium, the transformation reaches 70-85%. In the presence of thorium, the γ- and θ-phases are stabilized: thorium was found to inhibit α-phase formation at 1100 °C, whereas in all other doped specimens the α-phase forms at lower temperatures (1000-1050 °C). Synthesis of various compounds and their simultaneous consolidation were carried out in an OXY-GON furnace, in which composite materials containing oxide and non-oxide components with densities close to theoretical values were obtained. The following instruments were used in this work: a DRON-3M X-ray diffractometer (Cu-Kα, Ni filter, 2°/min), an OXY-GON high-temperature vacuum furnace, Nikon ECLIPSE LV 150 and NMM-800TRF microscopes, a Pulverisette 7 premium line planetary mill, a SHIMADZU DUH-211S Dynamic Ultra Micro Hardness Tester, and an Analysette 12 DynaSizer.

Keywords: α-Alumina, combustion, consolidation, phase transformation, seeding.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 4047
704 Nanoparticles-Protein Hybrid Based Magnetic Liposome

Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek

Abstract:

Liposomes play an important role in medical and pharmaceutical science, e.g. as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, consisting of a spherical lipid bilayer and an aqueous inner compartment. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues; such preparations contain encapsulated drug components and finely dispersed magnetic particles. Liposomes are attractive in terms of biocompatibility, biodegradability, and low toxicity, and their biodistribution can be controlled by changing their size, lipid composition, and physical characteristics. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to release the entrapped substrate continuously, making them useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles; they are applied as contrast agents for magnetic resonance imaging (MRI). The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology. A green-synthesized magnetite nanoparticles-protein hybrid was produced by treating iron(III)/iron(II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, including flavonoids, phenolic compounds, cardiac glycosides, proteins, and sugars, act as reducing as well as stabilizing agents and prevent agglomeration. The magnetite nanoparticles-protein hybrid was trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and the vesicles were shown to be driven under a magnetic field, confirming the formation of magnetic liposomes (MLs). Chemical characterization of the stealth magnetic liposomes was performed by breaking the liposomes to release the magnetic nanoparticles; the presence of iron was confirmed by colour complex formation with KSCN and by UV-Vis spectrophotometry (Cary 60, Agilent). These magnet-driven liposomes based on a nanoparticles-protein hybrid can serve as smart vesicles for targeted drug delivery.

Keywords: Nanoparticles-protein hybrid, magnetic liposome.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2989
703 Identification of the Best Blend Composition of Natural Rubber-High Density Polyethylene Blends for Roofing Applications

Authors: W. V. W. H. Wickramaarachchi, S. Walpalage, S. M. Egodage

Abstract:

Thermoplastic elastomer (TPE) is a multifunctional polymeric material that combines the excellent properties of its parent materials. Basically, a TPE has a rubber phase and a thermoplastic phase, which gives it the processability of a thermoplastic. When the rubber phase is partially or fully crosslinked in the thermoplastic matrix, the TPE is called a thermoplastic elastomer vulcanizate (TPV); if the rubber phase is not crosslinked, it is called a thermoplastic elastomer olefin (TPO). Nowadays TPEs are available in the commercial market in a range of products; however, their application as roofing materials is limited. Of the commercially available roofing products made from different materials, only single-ply roofing membranes and plastic roofing sheets are produced from rubbers and plastics. Natural rubber (NR) and high density polyethylene (HDPE) are used individually in various industrial applications, each with some drawbacks. This study was therefore focused on developing both TPO and TPV blends from NR and HDPE at different compositions and identifying the best blend composition for use as a roofing material. A series of blends was prepared in a twin-screw extruder by varying the NR loading from 10 wt% to 50 wt% at 10 wt% intervals, with dicumyl peroxide used as the crosslinker for the TPVs. The standard properties required of a roofing material, namely tensile properties, tear strength, hardness, impact strength, water absorption, swell/gel behaviour and thermal characteristics of the blends, were investigated, and the change in tensile strength after exposure to UV radiation was also studied. Tensile strength, hardness, tear strength, melting temperature and gel content of the TPVs were higher than those of the TPOs at every loading studied, while water absorption and swelling index were lower, suggesting that TPVs are more suitable than TPOs for roofing applications. Most of the optimum properties were obtained at the 10/90 (NR/HDPE) composition; however, the highest impact strength and gel content were obtained at the 20/80 (NR/HDPE) composition. Impact strength, being an energy-absorbing property, is the most important for a roofing material, which must resist impact loads. Therefore, 20/80 (NR/HDPE) was identified as the best blend composition. The UV resistance and other properties required of a roofing material could be achieved by incorporating suitable additives into the TPVs.
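
For readers unfamiliar with swell/gel analysis, the sketch below illustrates one common way of computing the swelling index and gel content from sample masses measured before and after solvent swelling and extraction. The definitions and the numbers are assumptions for illustration, not the formulas or data used in this study.

```python
def swelling_index(swollen_mass_g, deswollen_dry_mass_g):
    """Swelling index as the ratio of solvent-swollen mass to the dried (extracted) mass.

    Definitions vary between standards; this is one common form, not necessarily
    the one adopted in the paper."""
    return swollen_mass_g / deswollen_dry_mass_g

def gel_content_percent(initial_mass_g, dry_mass_after_extraction_g):
    """Gel content (crosslinked, insoluble fraction) as a percentage of the initial mass."""
    return 100.0 * dry_mass_after_extraction_g / initial_mass_g

# Hypothetical masses for illustration only (not measured values from the study):
m0, m_swollen, m_dry = 1.000, 2.850, 0.820   # grams
print(f"Swelling index: {swelling_index(m_swollen, m_dry):.2f}")
print(f"Gel content: {gel_content_percent(m0, m_dry):.1f} %")
```

A higher gel content together with a lower swelling index indicates a more densely crosslinked rubber phase, which is why these two quantities move in opposite directions for the TPVs in the study.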

Keywords: Thermoplastic elastomer, natural rubber, high density polyethylene, roofing material.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 895
702 The Effects of Drought and Nitrogen on Soybean (Glycine max (L.) Merrill) Physiology and Yield

Authors: Oqba Basal, András Szabó

Abstract:

Legume crops are able to fix atmospheric nitrogen through their symbiotic relationship with specific bacteria, which allows the use of mineral nitrogen fertilizer to be reduced or even excluded, resulting in more profit for farmers and less pollution of the environment. Soybean (Glycine max (L.) Merrill) is one of the most important legumes, with a high content of both protein and oil. However, it is recommended to combine the two nitrogen sources under stress conditions in order to overcome their negative effects. Drought stress is one of the most important abiotic stresses that increasingly limit soybean yields. A precise rate of mineral nitrogen under drought conditions has not been established, as it depends on many factors, soybean yield potential and soil nitrogen content among them. An experiment was conducted during the 2017 growing season in Debrecen, Hungary, to investigate the effects of the nitrogen source on the physiology and yield of the soybean cultivar 'Boglár'. Three N-fertilizer rates, namely no N-fertilizer (0 N), 35 kg ha-1 (35 N) and 105 kg ha-1 (105 N), were applied under three different irrigation regimes: severe drought stress (SD), moderate drought stress (MD) and a control with no drought stress (ND). Half of the seeds in each treatment were pre-inoculated with a Bradyrhizobium japonicum inoculant. The overall results showed significant differences associated with fertilization and irrigation, but not with inoculation. An increasing N rate was mostly accompanied by increased chlorophyll content and leaf area index, whereas it positively affected plant height only in the absence of drought. Plant height was lowest under severe drought, regardless of inoculation and N-fertilizer application and rate. Inoculation increased the yield when there was no drought, and a low rate of N-fertilizer increased the yield further; however, the high rate of N-fertilizer decreased the yield to a level even lower than that of the inoculated control. On the other hand, the yield of non-inoculated plants increased as the N-fertilizer rate increased. Under drought conditions, adding N-fertilizer increased the yield of the non-inoculated plants compared to their inoculated counterparts; moreover, the high rate of N-fertilizer resulted in the best yield. Regardless of inoculation, the mean yield of the three fertilization rates improved as the water amount increased. It was concluded that applying N-fertilizer to provide the nitrogen needed by soybean plants, in the absence of the N2-fixation process, is very important. Moreover, adding a relatively high rate of N-fertilizer is very important under severe drought stress to alleviate its negative effects. Further research to recommend the best N-fertilizer rate for inoculated soybean under drought stress conditions should be conducted.

Keywords: Drought stress, inoculation, N-fertilizer, soybean physiology, yield.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 750
701 Antibiotic Prescribing in the Acute Care in Iraq

Authors: Ola A. Nassr, Ali M. Abd Alridha, Rua A. Naser, Rasha S. Abbas

Abstract:

Background: Excessive and inappropriate use of antimicrobial agents among hospitalized patients remains an important patient safety and public health issue worldwide. Not only does this behavior incur unnecessary cost, but it is also associated with increased morbidity and mortality. The objective of this study was to obtain insight into the prescribing patterns of antibiotics in surgical and medical wards, to help identify scope for improvement in service delivery. Method: A simple point prevalence survey included a convenience sample of 200 patients admitted to medical and surgical wards in a government teaching hospital in Baghdad between October 2017 and April 2018. Data were collected by a trained pharmacy intern using a standardized form. Patients' demographics and details of the prescribed antibiotics, including dose, dosing frequency and route of administration, were recorded. Patients were included if they had been admitted at least 24 hours before the survey. Patients under 18 years of age, with a diagnosis of cancer or shock, or admitted to the intensive care unit were excluded. Data were checked and entered into Excel by the authors and subjected to frequency analysis, which was carried out on anonymized data to protect patient confidentiality. Results: Overall, 88.5% of patients (n=177) received 293 antibiotics during their hospital admission, with a small variation between wards (80%-97%). The average number of antibiotics prescribed per patient was 1.65, ranging from 1.3 for medical patients to 1.95 for surgical patients. Parenteral third-generation cephalosporins were the most commonly prescribed agents, at a rate of 54.3% (n=159), followed by nitroimidazoles at 29.4% (n=86), quinolones at 7.5% (n=22) and macrolides at 4.4% (n=13), while carbapenems and aminoglycosides were the least prescribed, together accounting for only 4.4% (n=13). The intravenous route was the most common route of administration, used for 96.6% of patients (n=171). An indication was documented in only 63.8% of cases, and a culture to identify the pathogenic organism was ordered in only 0.5% of cases. Conclusion: Broad-spectrum antibiotics are prescribed at an alarming rate. This practice may promote antibiotic resistance and adversely affect patient outcomes. Implementation of an antibiotic stewardship program is warranted to enhance the efficacy, safety and cost-effectiveness of antimicrobial agents.
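
The frequency analysis described above can be reproduced on anonymized records with a few lines of code; the sketch below is an illustration only, with hypothetical column names and toy data rather than the survey's actual dataset.

```python
import pandas as pd

# Hypothetical anonymized prescription records; one row per prescribed antibiotic.
# Column names are illustrative, not those of the survey form.
records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4],
    "ward": ["surgical", "surgical", "medical", "medical", "medical", "surgical"],
    "antibiotic_class": ["3rd-gen cephalosporin", "nitroimidazole",
                         "3rd-gen cephalosporin", "quinolone",
                         "nitroimidazole", "macrolide"],
    "route": ["IV", "IV", "IV", "oral", "IV", "IV"],
})

n_patients_surveyed = 6                      # includes patients with no antibiotic
treated = records["patient_id"].nunique()
print(f"Patients on antibiotics: {100 * treated / n_patients_surveyed:.1f} %")
print(f"Antibiotics per treated patient: {len(records) / treated:.2f}")
print(records["antibiotic_class"].value_counts(normalize=True).mul(100).round(1))
print(records["route"].value_counts(normalize=True).mul(100).round(1))
```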

Keywords: Acute care, antibiotic misuse, Iraq, prescribing.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 924
700 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in environments with high Additive White Gaussian Noise (AWGN) using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals into the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, together with the atomic dictionary, represents a denoised, compressed version of the original signal, and the arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district; each speaker has 10 sentences, two of which are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short training and test sequences as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
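
To make the atomic-decomposition step concrete, the sketch below shows matching pursuit over a small Gabor dictionary producing a sparse weight vector on a coarse T-F grid. The dictionary, its parameters, and the toy signal are assumptions for illustration; they are not the learned sparse-autoencoder dictionary or the TIMIT data used in the paper.

```python
import numpy as np

def gabor_atom(n, center, freq, width, fs):
    """Unit-norm Gabor atom: a Gaussian-windowed sinusoid on an n-sample grid."""
    t = np.arange(n) / fs
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with the
    residual, store its weight, and subtract its projection."""
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        weights[k] += corr[k]
        residual -= corr[k] * dictionary[k]
    return weights, residual

# Illustrative dictionary spanning a coarse time-frequency grid (parameters assumed).
fs, n = 8000, 512
centers = np.linspace(0.005, 0.059, 8)        # atom centers in seconds
freqs = np.linspace(200, 3200, 16)            # atom center frequencies in Hz
D = np.array([gabor_atom(n, c, f, 0.004, fs) for c in centers for f in freqs])

rng = np.random.default_rng(0)
clean = gabor_atom(n, 0.02, 1000, 0.004, fs) + 0.5 * gabor_atom(n, 0.045, 2400, 0.004, fs)
noisy = clean + 0.1 * rng.standard_normal(n)   # additive white Gaussian noise
w, r = matching_pursuit(noisy, D, n_atoms=10)
print("non-zero T-F weights:", np.count_nonzero(w), "residual energy:", float(r @ r))
```

The indices of the non-zero entries of `w` play the role of the atomic indices whose occurrence statistics are compared between training and test sentences in the paper's classifier.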

Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1522
699 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work in a master-slave configuration, where the user is the master and the robotic arms are the slaves. Current surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to inner organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for the forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and the MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of the microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
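
The voltage-to-force conversion described above is essentially a linear calibration applied to 10-bit ADC samples. The sketch below illustrates that step in isolation, with an assumed ADC reference voltage and placeholder calibration coefficients rather than the values obtained with the Imada force gauge in the paper.

```python
import numpy as np

ADC_BITS = 10
V_REF = 5.0                  # ADC reference voltage in volts (assumed)

# Hypothetical linear calibration from the force-gauge procedure: F = gain * V + offset.
# These coefficients are placeholders, not the calibrated values from the study.
GAIN_N_PER_V = 14.2
OFFSET_N = -1.3

def adc_counts_to_voltage(counts):
    """Convert raw 10-bit ADC counts (0..1023) to volts."""
    return counts / (2**ADC_BITS - 1) * V_REF

def voltage_to_force(volts):
    """Apply the linear strain-gage calibration to get force in newtons,
    clipped to the 0-35 N range over which the instrument was calibrated."""
    return np.clip(GAIN_N_PER_V * volts + OFFSET_N, 0.0, 35.0)

raw = np.array([120, 310, 512, 870])                  # example ADC samples
force = voltage_to_force(adc_counts_to_voltage(raw))
print(force)   # forces fed back to the haptic controller, in N
```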

Keywords: Haptic feedback, MATLAB, Simulink, strain gage, surgical robot.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3160
698 Thai Halal Products Brand Tips

Authors: Pibool Waijittragum

Abstract:

The purpose of this research is to analyze the marketing strategies of Thai Halal products, which relate to the way of life of Thai Muslims. The expected benefit is a marketing strategy for the brand-building process for Halal products in Thailand. The research framework consists of the four elements of marketing strategy necessary for brand identity creation: Attributes, Benefits, Values and Personality. The research methodology combined qualitative and quantitative approaches: 19 marketing experts with dynamic roles in Thai consumer products were interviewed, and a field survey of 122 Thai Muslims selected from 175 Muslim communities in Bangkok was conducted. Data were analyzed according to five categories of Thai Halal products: 1) meat; 2) vegetables and fruits; 3) instant foods and garnishing ingredients; 4) beverages, desserts and snacks; 5) hygienic daily products such as soap, shampoo and body lotion. The results point to suitable representations in the marketing strategies of Thai Halal products, as follows. 1) Benefit: the characteristics of the product together with its benefit. Consumers purchase the product because it provides beneficial nutrients, contains no toxic or chemical residues, and is made from fresh and clean materials. 2) Attribute: the exterior image that attracts the consumer. Consumers purchase the product because it carries a standards certification mark, a food and drug safety mark and a Halal mark; the packaging and its materials should draw attention, using attractive graphics and outstanding images of the product, its materials or its ingredients. 3) Value: the value of the product as perceived by consumers; it is a healthy product that improves quality of life, is the result of expertise and research, treats consumers as important, and is sincere, honest and reliable. 4) Personality: the reflection of consumers' self-image after consuming the product; they see themselves as health-conscious, rational, moral, just and thoughtful people with progressive thinking.

Keywords: Marketing strategies, Product identity, Branding, Thai Halal products.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2218