Search results for: Quasi-Newton iteration procedure


93 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
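As a rough illustration of the modeling pipeline described above (several classifiers scored by AUC against a MELD baseline), the following hedged Python sketch uses scikit-learn; the column names and the synthetic DataFrame are hypothetical placeholders, not the authors' data or code.

```python
# Illustrative sketch, not the authors' code: comparing ML classifiers against a MELD-score
# baseline by one-year-mortality AUC. A synthetic DataFrame stands in for the real cohort,
# so the printed numbers are meaningless; only the comparison pipeline is the point.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(2322, 20)),
                  columns=[f"feature_{i}" for i in range(20)])   # demographics, labs, comorbidities
df["meld_score"] = rng.uniform(6, 40, size=len(df))              # hypothetical MELD column
df["one_year_mortality"] = (rng.uniform(size=len(df)) < 0.2).astype(int)

X = df.drop(columns=["one_year_mortality", "meld_score"])
y = df["one_year_mortality"]
meld_auc = roc_auc_score(y, df["meld_score"])                    # MELD used directly as a ranking score

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f} (MELD baseline {meld_auc:.3f})")
```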

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

92 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert once the fabric is formed. To produce a product of the right shape, a camera type weft straightener has recently been applied to capture and process fabric images quickly. It is more effective in determining the final textile quality than a photo-sensor. Positioned in front of a stenter machine, the weft straightener helps to spread the fabric evenly and to keep the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, based on which the control technology can be derived. The structural analysis is meant to establish the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera type weft straightener to plain weave fabric and found its feasibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore a further application of the camera type weft straightener, namely, whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done in ANSYS software using the Finite Element Analysis method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is done to identify the specific characteristics between roller and fabrics, the control of the skew and bow rollers is done to decrease the error in the angle between warp and weft, and it is finally shown that the camera type straightener can also be used for special fabrics.

Keywords: Camera type weft straightener, structure analysis, control, skew and bow roller.

91 Soil-Structure Interaction Models for the Reinforced Foundation System: A State-of-the-Art Review

Authors: Ashwini V. Chavan, Sukhanand S. Bhosale

Abstract:

Challenges of a weak soil subgrade are often resolved either by stabilizing it or by reinforcing it. However, it is also common practice to reinforce the granular fill to improve its load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler Model, Pasternak Model, Hetenyi Model, Kerr Model, etc., and are helpful for visualizing the load-settlement behavior of a physical system through 1-D and 2-D analyses considering a beam and a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as the ‘Reinforced Pasternak Model,’ ‘Double Beam Model,’ ‘Reinforced Timoshenko Beam Model,’ and ‘Reinforced Kerr Model’. The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. A flow chart showing the procedure for computation of deformation and mobilized tension is also incorporated in the paper. Special effort is taken to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of results and findings of physical models.

Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil

90 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
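The comparison described above can be sketched as follows; this is an illustrative Python example on a different public face dataset (Olivetti) with simple rank-1 identification rather than the paper's XM2VTS verification protocol, so it only shows how PCA, LDA and ICA subspaces are fitted and compared on an equal footing.

```python
# Illustrative sketch (not the paper's protocol): fit PCA, LDA and ICA subspaces on the same
# training images, project a held-out set, and score each subspace with a 1-NN matcher.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()
X, y = faces.data, faces.target                     # flattened grayscale face images + identities
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

subspaces = {
    "PCA": PCA(n_components=100, whiten=True),
    "LDA": LinearDiscriminantAnalysis(n_components=len(np.unique(y)) - 1),
    "ICA": FastICA(n_components=100, random_state=0, max_iter=1000),
}
for name, model in subspaces.items():
    # LDA is the only supervised projection; it needs the labels when fitting.
    Ztr = model.fit_transform(Xtr, ytr) if name == "LDA" else model.fit_transform(Xtr)
    Zte = model.transform(Xte)
    acc = KNeighborsClassifier(n_neighbors=1).fit(Ztr, ytr).score(Zte, yte)
    print(f"{name}: rank-1 identification accuracy = {acc:.3f}")
```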

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

89 Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

Authors: Sundararajan Sangeetha, Joseph Jesu Christopher, Swaminathan Ramakrishnan

Abstract:

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An auto-threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level 4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. It is seen that in almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The parameters mean and median computed at the output of the level 4 Haar wavelet channel were found to be useful predictors to delineate the normal and the abnormal groups.
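A minimal Python sketch of the wavelet step described above, assuming a segmented primary-compressive-region image is available as a 2-D array (a random array stands in for it here); it performs a level-4 Haar decomposition and reports a subset of the statistics mentioned in the abstract.

```python
# Illustrative sketch: level-4 2-D Haar decomposition of a (placeholder) radiograph region,
# followed by summary statistics of the approximation and horizontal detail coefficients.
import numpy as np
import pywt

roi = np.random.rand(256, 256)           # placeholder for a segmented PC-region image
coeffs = pywt.wavedec2(roi, wavelet="haar", level=4)
approx = coeffs[0]                        # level-4 approximation coefficients (cA4)
horizontal = coeffs[1][0]                 # level-4 horizontal detail coefficients (cH4)

def summarize(c):
    c = c.ravel()
    return {
        "mean": np.mean(c),
        "median": np.median(c),
        "std": np.std(c),
        "mean_abs_dev": np.mean(np.abs(c - np.mean(c))),
        "median_abs_dev": np.median(np.abs(c - np.median(c))),
    }

print("approximation:", summarize(approx))
print("horizontal:   ", summarize(horizontal))
```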

Keywords: Image processing, planar radiographs, trabecular bone and wavelet analysis.

88 Development and Validation of a UPLC Method for the Determination of Albendazole Residues on Pharmaceutical Manufacturing Equipment Surfaces

Authors: R. S. Chandan, M. Vasudevan, Deecaraman, B. M. Gurupadayya

Abstract:

In pharmaceutical industries, it is very important to remove drug residues from the equipment and areas used. The cleaning procedure must be validated, so special attention must be devoted to the methods used for analysis of trace amounts of drugs. A rapid, sensitive and specific reverse phase ultra performance liquid chromatographic (UPLC) method was developed for the quantitative determination of Albendazole in cleaning validation swab samples. The method was validated using an ACQUITY HSS C18, 50 x 2.1 mm, 1.8 µm column with an isocratic mobile phase containing a mixture of 1.36 g of potassium dihydrogen phosphate in 1000 mL MilliQ water with 2 mL of triethylamine (pH adjusted to 2.3 ± 0.05 with ortho-phosphoric acid), acetonitrile and methanol (50:40:10 v/v). The flow rate of the mobile phase was 0.5 mL/min, with a column temperature of 35 °C and detection at 254 nm using a PDA detector. The injection volume was 2 µL. Cotton swabs, moistened with acetonitrile, were used to remove any residue of drug from stainless steel, teflon, rubber and silicon plates which mimic the production equipment surface, and the mean extraction recovery was found to be 91.8%. The selected chromatographic conditions were found to elute Albendazole effectively with a retention time of 0.67 min. The proposed method was found to be linear over the range of 0.2 to 150 µg/mL with a correlation coefficient of 0.9992. The proposed method was found to be accurate, precise, reproducible and specific, and it can also be used for routine quality control analysis of these drugs in biological samples either alone or in combined pharmaceutical dosage forms.

Keywords: Cleaning validation, Albendazole, residues, swab analysis, UPLC.

87 Development of Circulating Support Environment of Multilingual Medical Communication using Parallel Texts for Foreign Patients

Authors: Mai Miyabe, Taku Fukushima, Takashi Yoshino, Aguri Shigeno

Abstract:

The need for multilingual communication in Japan has increased due to an increase in the number of foreigners in the country. When people communicate in their nonnative language, the differences in language prevent mutual understanding among the communicating individuals. In the medical field, communication between the hospital staff and patients is a serious problem. Currently, medical translators accompany patients to medical care facilities, and the demand for medical translators is increasing. However, medical translators cannot necessarily provide support, especially in cases in which round-the-clock support is required or in case of emergencies. The medical field has high expectations from information technology. Hence, a system that supports accurate multilingual communication is required. Despite recent advances in machine translation technology, it is very difficult to obtain highly accurate translations. We have developed a support system called M3 for multilingual medical reception. M3 provides support functions that aid foreign patients in the following respects: conversation, questionnaires, reception procedures, and hospital navigation; it also has a Q&A function. Users can operate M3 using a touch screen and receive text-based support. In addition, M3 uses accurate translation tools called parallel texts to facilitate reliable communication through conversations between the hospital staff and the patients. However, if there is no parallel text that expresses what users want to communicate, the users cannot communicate. In this study, we have developed a circulating support environment for multilingual medical communication using parallel texts. The proposed environment can circulate necessary parallel texts through the following procedure: (1) a user provides feedback about the necessary parallel texts, following which (2) these parallel texts are created and evaluated.

Keywords: multilingual medical communication, parallel texts.

86 Combined Effect of Heat Stimulation and Delayed Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar

Authors: Faraidoon Rahmanzai, Mizuki Takigawa, Yu Bomura, Shigeyuki Date

Abstract:

To obtain high quality and the essential workability of mortar, different types of superplasticizers are used. Superplasticizers are chemical admixtures used in the mix to improve the fluidity of mortar. Many factors influence how the superplasticizer disperses the cement particles in the mortar. The nature and amount of cement replaced by slag, the mixing procedure, the delayed addition time, and the heat stimulation technique of the superplasticizer cause varied effects on the fluidity of the cementitious material. In this experiment, the superplasticizers were heated for 1 hour at 60 °C in a thermostatic chamber. Furthermore, the effect of the delayed addition time of the heat-stimulated superplasticizers (SP) was also analyzed. This method was applied to two types of polycarboxylic acid based ether SP (a precast type superplasticizer (SP2) and a ready-mix type superplasticizer (SP1)) in combination with a partial replacement of normal Portland cement with blast furnace slag (BFS) at a 30% w/c ratio. The fluidity, air content, fresh density, and compressive strength at 7 and 28 days were also studied. The results indicate that the addition time and heat stimulation technique improved the flow and air content, decreased the density, and slightly decreased the compressive strength of mortar. Moreover, increasing the amount of slag improved the flow of mortar, and the effect of the external temperature of the SP on the flow of mortar decreased. In comparison, the flow of mortar was improved at a 5-minute delay for both kinds of SP, but SP1 improved the flow under all conditions. Most importantly, the transition points for both types of SP appear to be the same, at about 5±1 min. The optimum addition time of SP to mortar should therefore be within this period.

Keywords: Combined effect, delayed addition, heat stimulation, flow of mortar.

85 Persuasive Communication on Social Egg Freezing in California from a Framing Theory Perspective

Authors: Leila Mohammadi

Abstract:

This paper presents the impact of persuasive communication implemented by fertility clinic websites, and how this information influences women in their decision-making about undertaking this procedure. The factors influencing women's decisions to undergo social egg freezing (SEF) are analyzed from a framing theory perspective, with a specific focus on the impact of persuasive information on women's decision making. This study follows a quantitative approach. A two-phase survey was conducted to examine the interest in undertaking SEF. In the first phase, a questionnaire was available to women for one month (May 2015), asking whether or not they had enough information about this process, with a total of 230 answers. The second phase took place in the last two weeks of July 2015. All the respondents were invited to a seminar called ‘All about egg freezing’ and afterwards they were requested to answer the second questionnaire. After the seminar, in which they were given an extensive amount of information about egg freezing, a total of 115 women replied to the questionnaire. The data collected during this process were analyzed using descriptive statistics. Most of the respondents changed their opinion in the second questionnaire, which followed the provision of information. Although in the first questionnaire their self-evaluation of their knowledge about this process and the implemented technologies was very high, they realized that they still needed to access more information from different sources in order to be able to make a decision. The study reached the conclusion that persuasive and framed information by clinics affects the decisions of these women. Whatever the reasons women have for egg freezing and their motivations behind it, providing people with the necessary information and unprejudiced data about this process (such as its positive and negative aspects, requirements, suppositions, possibilities and consequences) would help them to make a more precise and reasonable decision about what they are buying.

Keywords: Decision making, fertility clinics, framing theory, persuasive information, social egg freezing.

84 Patterns of Malignant and Benign Breast Lesions in Hail Region: A Retrospective Study at King Khalid Hospital

Authors: Laila Seada, Ashraf Ibrahim, Amjad Al Shammari

Abstract:

Background and Objectives: Breast carcinoma is the most common cancer of females in the Hail region, accounting for 31% of all diagnosed cancer cases, followed by thyroid carcinoma (25%) and colorectal carcinoma (13%). Methods: In the present retrospective study, all cases of breast lesions received at the histopathology department in King Khalid Hospital, Hail, during the period from May 2011 to April 2016 were retrieved from department files. For all cases, a trucut biopsy, lumpectomy, or modified radical mastectomy was available for histopathologic diagnosis, while 105/140 (75%) also had preoperative fine needle aspirates (FNA). Results: 49 out of 140 (35%) breast lesions were carcinomas: 44/49 (89.75%) were invasive ductal carcinoma, 2/49 (4.1%) invasive lobular carcinoma, 1/49 (2.05%) intracystic low grade papillary carcinoma and 2/49 (4.1%) ductal carcinoma in situ (DCIS). The mean age for malignant cases was 45.06 (±10.58) years: 32.6% were below the age of 40, 30.6% below 50 years, 18.3% below 60 and 16.3% below 70 years. For the benign group, the mean age was 32.52 (±10.5) years. Benign lesions were, in order of frequency: 34 fibroadenomas, 14 fibrocystic disease, 12 chronic mastitis, five granulomatous mastitis, three intraductal papillomas, and three benign phyllodes tumors. Tubular adenoma, lipoma, skin nevus, pilomatrixoma, and breast reduction specimens constituted the remaining specimens. Conclusion: Breast lesions are common in our series, and invasive carcinoma accounts for more than a third of the lumps, with a 63.2% incidence in pre-menopausal women below the age of 50 years. FNA, as a non-invasive procedure, proved to be an effective tool in diagnosing both benign and malignant/suspicious breast lumps and should continue to be used as a first line of assessment of palpable breast masses.

Keywords: Age incidence, breast carcinoma, fine needle aspiration, Hail Region.

83 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of the joint performance in many applications such as pre-operative planning simulation, human gait analysis, and hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in the estimation of hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method to assess the displacements of the markers relative to the underlying bone using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as a basis for a correction procedure for hip joint kinematics.

Keywords: Hip joint centre, motion capture, soft tissue artefact, ultrasound depth measurement.

82 Work System Design in Productivity for Small and Medium Enterprises: A Systematic Literature Review

Authors: S. Halofaki, D. R. Seenivasagam, P. Bijay, K. Singh, R. Ananthanarayanan

Abstract:

This comprehensive literature review delves into the effects and applications of work system design on the performance of Small and Medium-sized Enterprises (SMEs). The review process involved three independent reviewers who screened 514 articles through a four-step procedure: removing duplicates, assessing keyword relevance, evaluating abstract content, and thoroughly reviewing full-text articles. Various criteria such as relevance to the research topic, publication type, study type, language, publication date, and methodological quality were employed to exclude certain publications. The articles that met the predefined inclusion criteria were retained for this systematic literature review. These selected publications underwent data extraction and analysis to compile insights regarding the influence of work system design on SME performance. Additionally, the quality of the included studies was assessed, and the level of confidence in the body of evidence was established. The findings of this review shed light on how work system design impacts SME performance, emphasizing important implications and applications. Furthermore, the review offers suggestions for further research in this critical area and summarizes the current state of knowledge in the field. Understanding the intricate connections between work system design and SME success can enhance operational efficiency, employee engagement, and overall competitiveness for SMEs. This comprehensive examination of the literature contributes significantly to both academic research and practical decision-making for SMEs.

Keywords: Literature review, productivity, small and medium-sized enterprises, SMEs, work system design.

81 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design

Authors: Padmanabhan Balasubramanian, Ryuta Arisaka

Abstract:

Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions is presented in this paper, developed on the basis of the set theory paradigm. The impact of factoring is analyzed mainly from a low power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130 nm (0.13 µm) UMC CMOS library, for the typical case. The wire loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low power perspective, for arbitrary examples. Though the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85%, respectively. In terms of leakage power, the average savings for the factored forms were significant, to the tune of 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
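As a toy illustration of set-based factoring (not the authors' heuristic), a sum-of-products expression can be represented as a list of cubes, i.e. sets of literals, and the literals common to all cubes can be factored out, e.g. a&b&c + a&b&d becomes a&b&(c + d):

```python
# Toy sketch of set-theoretic common-literal extraction for a sum-of-products expression.
from functools import reduce

def factor_common(cubes):
    """cubes: list of frozensets of literal strings; returns (common literals, residual cubes)."""
    common = frozenset(reduce(set.intersection, (set(c) for c in cubes)))
    remainders = [c - common for c in cubes]
    return common, remainders

cubes = [frozenset({"a", "b", "c"}), frozenset({"a", "b", "d"})]
common, rest = factor_common(cubes)
print("common factor:", "&".join(sorted(common)))                                   # a&b
print("residual SOP :", " + ".join("&".join(sorted(r)) or "1" for r in rest))       # c + d
```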

Keywords: Factorization, Set theory, Logic function, Standard cell based design, Low power.

80 Perceptual and Ultrasound Articulatory Training Effects on English L2 Vowels Production by Italian Learners

Authors: I. Sonia d’Apolito, Bianca Sisinni, Mirko Grimaldi, Barbara Gili Fivela

Abstract:

The American English contrast /ɑ-ʌ/ (cop-cup) is difficult for Italian learners to produce, since they realize L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively, due to differences in the phonetic-phonological systems and also in grapheme-to-phoneme conversion rules. In this paper, we try to answer the following research questions: Can a short training improve the production of English /ɑ-ʌ/ by Italian learners? Is a perceptual training better than an articulatory (ultrasound - US) training? Thus, we compare a perceptual training with a US articulatory one to observe: 1) the effects of short trainings on L2-/ɑ-ʌ/ productions; 2) whether the US articulatory training improves the pronunciation better than the perceptual training. In this pilot study, 9 Salento-Italian monolingual adults participated: 3 subjects performed a 1-hour perceptual training (ES-P); 3 subjects performed a 1-hour US training (ES-US); and 3 control subjects did not receive any training (CS). Verbal instructions about the phonetic properties of L2-/ɑ-ʌ/ and L1-/ɔ-a/ and their differences (representation on the F1-F2 plane) were provided during both trainings. After these instructions, the ES-P group performed an identification training based on the High Variability Phonetic Training procedure, while the ES-US group performed the articulatory training, by means of US videos of tongue gestures in L2-/ɑ-ʌ/ production and a dynamic view of their own tongue movements and position using a probe under their chin. The acoustic data were analyzed and the first three formants were calculated. Independent t-tests were run to compare: 1) /ɑ-ʌ/ in the pre- vs. post-test, respectively; and 2) /ɑ-ʌ/ in the pre- and post-test vs. L1-/a-ɔ/, respectively. Results show that in the pre-test all speakers realize L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively. Contrary to the CS and ES-P groups, the ES-US group in the post-test differentiates the L2 vowels from those produced in the pre-test as well as from the L1 vowels, although only one ES-US subject produces both L2 vowels accurately. The articulatory training seems more effective than the perceptual one, since it favors the production of vowels in the correct direction of the L2 vowels and away from the similar L1 vowels.

Keywords: L2 vowel production, perceptual training, articulatory training, ultrasound.

79 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model

Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi

Abstract:

Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface ground water. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, in order to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural networks (NNs) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, the van Genuchten retention model parameters (i.e. θr, α, and n) as well as effective porosity. We used two methods to calculate effective porosity: (1) φeff = θs − θFC, and (2) φeff = θs − θinf, in which θs is the saturated water content, θFC is the water content retained at −33 kPa matric potential, and θinf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid over-training), and 62 for testing of the NNs model. A commercial neural network toolbox of MATLAB software with a multi-layer perceptron model and the back propagation algorithm were used for the training procedure. Statistical parameters such as the correlation coefficient (R2) and the mean square error (MSE) were used to evaluate the developed NNs model. The best number of neurons in the middle layer of the NNs model for methods (1) and (2) was calculated as 44 and 6, respectively. The R2 and MSE values of the test phase were determined for method (1) as 0.94 and 0.0016, and for method (2) as 0.98 and 0.00065, respectively, which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
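A hedged Python sketch of this kind of pedotransfer model follows (scikit-learn instead of the MATLAB toolbox used by the authors, and random placeholder arrays instead of the UNSODA table); it shows a single-hidden-layer perceptron trained by backpropagation and evaluated with R2 and MSE.

```python
# Illustrative sketch only: the placeholder random data cannot produce meaningful scores.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical feature order: sand %, clay %, bulk density, theta_r, alpha, n, effective porosity
X = np.random.rand(311, 7)                 # placeholder for the UNSODA-derived predictor table
y = np.random.rand(311)                    # placeholder for (e.g. log-transformed) Ks values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)   # 6 hidden neurons, as for method (2)
net.fit(X_train, y_train)
pred = net.predict(X_test)
print("R2 =", r2_score(y_test, pred), " MSE =", mean_squared_error(y_test, pred))
```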

Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.

78 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking and adequate service strategies is a well-known technique for computer network traffic congestion control. This motivates the study of series queues with blocking, feedback (service under the Head of Line (HoL) priority discipline) and finite capacity buffers with thresholds. In this paper, the external traffic is modelled using the Poisson process and the service times have been modelled using the exponential distribution. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. This computer network behaves as follows. A task which finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. In the opposite case, for fed-back tasks, a "no two priority services in succession" procedure (preventing a possible overflow in the first buffer) is applied. Using an open Markovian queuing schema with blocking, priority feedback service and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate. It is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculations of the main measures of effectiveness. Consequently, efficient expressions with low computational cost are determined. Based on numerical experiments and collected results, we conclude that the proposed model with blocking, feedback and thresholds can provide accurate performance estimates of linked-in-series networks.
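The computational core described above, solving the steady-state equations of a Markov chain built from the state graph, can be illustrated generically; the sketch below (not the paper's specific three-station model) solves pi Q = 0 with the normalization condition for a small birth-death example standing in for a finite buffer.

```python
# Generic illustration: steady-state distribution of a continuous-time Markov chain.
import numpy as np

def steady_state(Q):
    """Q: (n, n) generator matrix with rows summing to zero; returns the stationary vector pi."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])     # pi Q = 0 stacked with the normalization sum(pi) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Tiny 3-state birth-death chain standing in for a buffer of capacity 2.
lam, mu = 1.0, 1.5
Q = np.array([[-lam,          lam,  0.0],
              [  mu, -(lam + mu),  lam],
              [ 0.0,           mu,  -mu]])
pi = steady_state(Q)
print("steady-state probabilities:", pi, " mean queue length:", pi @ np.arange(3))
```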

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-base networks.

77 How Children Synchronize with Their Teacher: Evidence from a Real-World Elementary School Classroom

Authors: Reiko Yamamoto

Abstract:

This paper reports on how synchrony occurs between children and their teacher, and what prevents or facilitates synchrony. The aim of the experiment conducted in this study was to precisely analyze their movements and synchrony and to reveal the process of synchrony in a real-world classroom. Specifically, the experiment was conducted for around 20 minutes during an English as a foreign language (EFL) lesson. The participants were 11 fourth-grade school children and their classroom teacher in a public elementary school in Japan. Previous researchers assert that synchrony causes a state of flow in a class. For checking the level of flow, the Short Flow State Scale (SFSS) was adopted. The experimental procedure had four steps: 1) The teacher read aloud the first half of an English storybook to the children. Both the teacher and the children were at their own desks. 2) The children were subjected to an SFSS check. 3) The teacher read aloud the remaining half of the storybook to the children. She had the children remove their desks before reading. 4) The children were again subjected to an SFSS check. The movements of all participants were recorded with a video camera. From the movement analysis, it was found that the children synchronized better with the teacher in Step 3 than in Step 1, and that the teacher's movement became freer and more prominent without a desk. This implies that the desk acted as a barrier between the children and the teacher. Removal of this barrier resulted in the children's reactions becoming synchronized with those of the teacher. The SFSS results proved that the children experienced more flow without a barrier than with a barrier. Apparently, synchrony is what caused flow or social emotions in the classroom. The main conclusion is that synchrony leads to cognitive outcomes such as children's academic performance in EFL learning.

Keywords: Movement synchrony, teacher–child relationships, English as a foreign language, EFL learning.

76 Application of Pulse Doubling in Star-Connected Autotransformer Based 12-Pulse AC-DC Converter for Power Quality Improvement

Authors: Rohollah. Abdollahi, Alireza. Jalilian

Abstract:

This paper presents a pulse doubling technique in a 12-pulse AC-DC converter which supplies direct torque controlled induction motor drives (DTCIMDs) in order to have better power quality conditions at the point of common coupling. The proposed technique increases the number of rectification pulses without significant changes in the installation and yields harmonic reduction on both the AC and DC sides. The 12-pulse rectified output voltage is obtained via two paralleled six-pulse AC-DC converters, each consisting of a three-phase diode bridge rectifier. An autotransformer is designed to supply the rectifiers. The magnetics are designed in such a way as to make them suitable for retrofit applications where a six-pulse diode bridge rectifier is already being utilized. Independent operation of the paralleled diode-bridge rectifiers, i.e. the dc-ripple re-injection methodology, requires a Zero Sequence Blocking Transformer (ZSBT). Finally, a tapped interphase reactor is connected at the output of the ZSBT to double the pulse number of the output voltage up to 24 pulses. The aforementioned structure improves power quality criteria at the AC mains and makes them consistent with the IEEE-519 standard requirements for varying loads. Furthermore, near unity power factor is obtained for a wide range of DTCIMD operation. A comparison is made between the 6-pulse, 12-pulse, and proposed converters from the viewpoint of power quality indices. Results show that the input current total harmonic distortion (THD) is less than 5% for the proposed topology at various loads.
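For reference, the THD figure quoted above is computed from the harmonic spectrum of the input current; the following Python sketch shows the calculation on a synthetic waveform (illustrative only, not the drive measurements of the paper).

```python
# THD of a sampled current waveform: ratio of the RMS of harmonics 2..50 to the fundamental.
import numpy as np

fs, f1, cycles = 50_000.0, 50.0, 10                 # sampling rate, fundamental, number of cycles
t = np.arange(int(fs * cycles / f1)) / fs
# Synthetic 24-pulse-like current: fundamental plus small 23rd and 25th harmonics.
i = np.sin(2*np.pi*f1*t) + 0.03*np.sin(2*np.pi*23*f1*t) + 0.03*np.sin(2*np.pi*25*f1*t)

spectrum = np.abs(np.fft.rfft(i)) / len(i)
fund_bin = int(round(f1 * len(i) / fs))
fund = spectrum[fund_bin]
harmonics = [spectrum[int(round(k * f1 * len(i) / fs))] for k in range(2, 51)]
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print(f"THD = {100*thd:.2f} %")
```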

Keywords: AC–DC converter, star-connected autotransformer, power quality, 24 pulse rectifier, Pulse Doubling, direct torquecontrolled induction motor drive (DTCIMD).

75 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to examine the structural behavior of the precast foamed concrete sandwich panel (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were developed with varying heights to study an important parameter, the slenderness ratio (H/t). The production technique of PFCSP and the test setup procedure are described. The results obtained from the experimental tests were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns and mode of failure. An analytical study using finite element analysis was implemented, and theoretical calculations of the ultimate in-plane shear strengths were performed using the adopted ACI 318 equation for reinforced concrete walls, aimed at predicting the in-plane shear strength of PFCSP. The decrease in slenderness ratio from 24 to 14 showed an increase of 26.51% and 21.91% in the ultimate in-plane shear strength capacity as obtained experimentally and in the FEA models, respectively. The experimental test results, FEA model data and theoretical calculation values were compared and showed significant agreement with a high degree of accuracy. Therefore, on the basis of the results obtained, the PFCSP wall has potential use as an alternative to the conventional load-bearing wall system.
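The abstract refers to "the adopted ACI 318 equation for reinforced concrete walls"; for orientation, the ACI 318 expression commonly used for the nominal in-plane shear strength of reinforced concrete walls is reproduced below in SI units (whether this exact form was the one adopted for PFCSP is an assumption here), where A_cv is the gross shear area, λ the lightweight-concrete factor, ρ_t the transverse reinforcement ratio and f_y its yield strength.

```latex
V_n = A_{cv}\left(\alpha_c \lambda \sqrt{f'_c} + \rho_t f_y\right),
\qquad
\alpha_c =
\begin{cases}
0.25 & \text{for } h_w/\ell_w \le 1.5,\\
0.17 & \text{for } h_w/\ell_w \ge 2.0,
\end{cases}
```

with α_c interpolated linearly for intermediate wall aspect ratios h_w/ℓ_w.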

Keywords: Deflection profiles, foamed concrete, load-strain relationships, precast foamed concrete sandwich panel, slenderness ratio, vertical in-plane shear strength capacity.

74 Thiopental-Fentanyl versus Midazolam-Fentanyl for Emergency Department Procedural Sedation and Analgesia in Patients with Shoulder Dislocation and Distal Radial Fracture-Dislocation: A Randomized Double-Blind Controlled Trial

Authors: D. Farsi, Gh. Dokhtvasi, S. Abbasi, S. Shafiee Ardestani, E. Payani

Abstract:

Background and aim: It has not been well studied whether fentanyl-thiopental (FT) is effective and safe for procedural sedation and analgesia (PSA) in orthopedic procedures in the Emergency Department (ED). The aim of this trial was to evaluate the effectiveness of intravenous FT versus fentanyl-midazolam (FM) in patients who suffered from shoulder dislocation or distal radial fracture-dislocation. Methods: In this randomized double-blinded study, seventy-six eligible patients entered the study and randomly received intravenous FT or FM. The success rate, onset of action and recovery time, pain score, physicians' satisfaction and adverse events were assessed and recorded by the treating emergency physicians. The statistical analysis was by intention to treat. Results: The success rate after administering the loading dose in the FT group was significantly higher than in the FM group (71.7% vs. 48.9%, p=0.04); however, the ultimate failure rate after 3 doses of the drugs in the FT group was higher than in the FM group (3 to 1), but this did not reach a significant level (p=0.61). Despite a nearly equal onset of action time in the two study groups (p=0.464), the recovery period in patients receiving FT was markedly shorter than in the FM group (p<0.001). The occurrence of adverse effects was low in both groups (p=0.31). Conclusion: PSA using FT is effective and appears to be safe for orthopedic procedures in the ED. Therefore, regarding the prompt onset of action and the short recovery period of thiopental, it seems that this combination can be considered more often for performing PSA in orthopedic procedures in the ED.

Keywords: Procedural Sedation and Analgesia, Thiopental, Fentanyl, Midazolam, Orthopedic Procedure, Emergency Department, Pain.

73 Thermal and Electrical Properties of Carbon Nanotubes Purified by Acid Digestion

Authors: Neslihan Yuca, Nilgün Karatepe, Fahrettin Yakuphanoğlu

Abstract:

Carbon nanotubes (CNTs) possess unique structural, mechanical, thermal and electronic properties and have been proposed for applications in many fields. However, to reach the full potential of CNTs, many problems still need to be solved, including the development of an easy and effective purification procedure, since synthesized CNTs contain impurities such as amorphous carbon, carbon nanoparticles and metal particles. Different purification methods yield different CNT characteristics and may be suitable for the production of different types of CNTs. In this study, the effect of different purification chemicals on carbon nanotube quality was investigated. CNTs were first synthesized by chemical vapor deposition (CVD) of acetylene (C2H2) on a magnesium oxide (MgO) powder impregnated with an iron nitrate (Fe(NO3)3·9H2O) solution. The synthesis parameters were selected as: a synthesis temperature of 800 °C, an iron content in the precursor of 5%, and a synthesis time of 30 min. The liquid phase oxidation method was applied for the purification of the synthesized CNT materials. Three different acids (HNO3, H2SO4, and HCl) were used for the removal of the metal catalysts from the synthesized CNT material, to investigate the possible effects of each acid solution on the purification step. Purification experiments were carried out at two different temperatures (75 and 120 °C), two different acid concentrations (3 and 6 M) and for three different time intervals (6, 8 and 15 h). A 30% H2O2 : 3 M HCl (1:1 v%) solution was also used in the purification step to remove both the metal catalysts and the amorphous carbon. The purifications using this solution were performed at a temperature of 75 °C for 8 hours. Purification efficiencies at different conditions were evaluated by thermogravimetric analysis. The thermal and electrical properties of the CNTs were also determined. It was found that the obtained electrical conductivity values for the carbon nanotubes were typical of organic semiconductor materials, and that the thermal stabilities varied depending on the purification chemicals.

Keywords: Carbon nanotubes, purification, acid digestion, thermal stability, electrical conductivity.

72 The Effect of Stone Column (Nailing and Geogrid) on Stability of Expansive Clay

Authors: Komeil Valipourian, Mohsen Ramezan Shirazi, Orod Zarrin Kafsh

Abstract:

With the increasing use of sites for construction and the lack of appropriate locations, engineers attempt to find new methods to reduce the weakness of soils. From an economic point of view, various methods have been used to improve weak ground. Because of the rapid development of infrastructural facilities, the expansion of construction activity is unavoidable, and at many sites with very poor soil conditions engineers face obvious problems. One of the most important methods for improving weak soils is the stone column. The method was initially introduced in France in 1830 to improve a native soil. Stone columns have an expanding range of usage in difficult foundation sites all over the world to increase the bearing capacity, to reduce the total and differential settlements, to enhance the rate of consolidation, to stabilize the slopes of embankments and to increase the liquefaction resistance as well. A recent procedure, installing vertical nails along the circumference of the stone columns in order to improve the performance of the columns, is presented. Moreover, the benefits of vertical circumferential nails increase with the nail diameter, number and embedment depth. Based on the results of this study, the load carrying capacity develops with increasing length and strength of the reinforcements in the vertically encased stone column (CESC). In this study, the main purpose is to compare two stone column methods (nails installed around the stone columns, and geogrid on clay) for enhancing the bearing capacity and decreasing the total and differential settlements.

Keywords: Bearing Capacity, Clay, Geogrid, Nailing, Settlements, Stone Column.

71 Effect of Fire Retardant Painting Product on Smoke Optical Density of Burning Natural Wood Samples

Authors: Abdullah N. Olimat, Ahmad S. Awad, Faisal M. AL-Ghathian

Abstract:

Natural wood is used in many applications in Jordan, such as furniture, partition construction, and cupboards. Experimental work on the smoke produced by the combustion of certain wood samples was carried out. Smoke generated from the burning of natural wood is considered a major cause of death in furniture fires. The critical parameter for life safety in fires is the available time for escape, so the visual obscuration due to smoke released during a fire is taken into consideration. The effect of smoke produced by burning wood depends on the amount of smoke released in case of fire, and the amount of smoke production clearly affects the time available for the occupants to escape. To achieve the protection of the lives of building occupants during fire growth, fire retardant painting products are tested. The tested samples of natural wood include Beech, Ash, Beech Pine, and white Beech Pine. A smoke density chamber manufactured by Fire Testing Technology has been used to perform the measurement of smoke properties. The test procedure was carried out according to ISO 5659. The wood samples, in a horizontal orientation, are exposed to a vertical radiant heat flux of 25 kW/m2 under non-flaming conditions. The main objective of the current study is to carry out experimental tests on samples of natural woods to evaluate the capability to escape in case of fire and the fire safety requirements. Specific optical density, transmittance, thermal conductivity, and mass loss are the main measured parameters. Comparisons between samples with paint and with no paint are also carried out for the selected samples of woods.
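For reference, the specific optical density reported from an ISO 5659-type smoke density chamber is normally obtained from the measured light transmittance via the relation below, where V is the chamber volume, A the exposed specimen area, L the optical path length and T the percentage transmittance (this is the standard chamber relation, not a formula reproduced from the paper).

```latex
D_s = \frac{V}{A\,L}\,\log_{10}\!\left(\frac{100}{T}\right)
```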

Keywords: Optical density, specific optical density, transmittance, visibility.

70 High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis

Authors: A. Ghanbari Mardasi, N. Wu, C. Wu

Abstract:

In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to erase the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and vibration analysis considering the crack effect are first proposed and performed in MATLAB based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect so as to locate the crack. A relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions of the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks with different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to the different vibration mode shapes in order to overcome the edge effect problem of the wavelet transformation and its effect on the localization of cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and initial mode shapes demonstrates that the window function eases the identification of cracks close to the boundaries.
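A simplified sketch of the detection step follows (synthetic mode shape, not the Timoshenko-beam model of the paper): a continuous wavelet transform is applied along the beam axis, a Hanning window suppresses the edge effect, and the slope discontinuity introduced by the crack shows up as a local peak of the wavelet coefficients. A Morlet wavelet stands in here for the Gabor family used by the authors.

```python
# Illustrative sketch: spatial CWT of a windowed mode shape with a small slope discontinuity.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 500)                     # normalized position along the beam
mode = np.sin(np.pi * x)                           # first bending mode of the intact beam
mode += 0.1 * np.maximum(x - 0.3, 0.0)             # synthetic slope discontinuity at x = 0.3 (the "crack")

windowed = mode * np.hanning(len(mode))            # Hanning window against boundary distortion
scales = np.arange(2, 21)
coeffs, _ = pywt.cwt(windowed, scales, "morl")     # spatial wavelet coefficients, shape (scales, positions)
energy = np.sum(np.abs(coeffs) ** 2, axis=0)       # summed over scales

interior = slice(50, -50)                          # ignore residual edge effects
peak = np.argmax(energy[interior]) + 50
print("peak of wavelet energy at x =", x[peak], "(expected near the crack at x = 0.3)")
```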

Keywords: Edge effect, scale optimization, small crack locating, spatial wavelet.

69 Deregulation of Turkish State Railways Based on Public-Private Partnership Approaches

Authors: S. Shakibaei, P. Alpkokin

Abstract:

The railway network is one of the major components of a transportation system in a country and may be an indicator of the country's level of economic development. Since the 2000s, the revival of national railways and the development of High Speed Rail (HSR) lines have been among the most remarkable policies of the Turkish government in the railway sector. Within this trend, the railway age is to be revived, and the coming decades will be a golden opportunity. Undoubtedly, major infrastructures such as road and railway networks require sizeable investment capital and precise maintenance and repair. Traditionally, governments are held responsible for funding, operating and maintaining these infrastructures. However, a lack or shortage of financial resources, risk responsibilities (particularly cost and time overruns), and in some cases inefficiency in the construction, operation and management phases persuade governments to find alternative options. The financial power, experience and background of the private sector are the factors convincing governments to collaborate with private parties to develop infrastructure. Public-Private Partnerships (PPP, 3P or P3) and the related regulatory issues arise from these collaborations. In Turkey, PPP approaches have attracted attention particularly during the last decade, and these types of investments have been accelerated by the government to overcome budget limitations and cope with the inefficiency of the public sector in improving the transportation network and its operation. This study mainly presents a comprehensive overview of the PPP concept, evaluates the regulatory procedure in Europe and proposes a general framework for Turkish State Railways (TCDD) as an outlook on privatization, liberalization and deregulation of the railway network.

Keywords: Deregulation, high-speed rail, liberalization, privatization, public-private partnership.

68 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize the alkaline catalysis liquefaction conditions with regard to several parameters. To better comprehend the chemical characteristics of the bark of Quercus cerris, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacket reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents, potassium hydroxide as a catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency resulting from the first experimental procedures, a study was made of different washing techniques after the filtration process, using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%) as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the results that led to higher yields were obtained using a mixture of methanol/ethylene glycol as reagents and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even if this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which proves that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.

Keywords: Liquefaction, alkaline catalysis, optimization, Quercus cerris bark.

67 Liability Aspects Related to Genetically Modified Food under the Food Safety Legislation in India

Authors: S. K. Balashanmugam, Padmavati Manchikanti, S. R. Subramanian

Abstract:

The question of legal liability for injury arising out of the import and introduction of GM food emerges as a crucial issue confronting the promotion of GM food and its derivatives. There is a strong possibility that commercialized GM food from an exporting country will enter an importing country where the approval status is not the same. This underlines the importance of establishing a liability mechanism to address any damage that occurs at the stage of transboundary movement or in the market. There was widespread consensus to develop the Cartagena Protocol on Biosafety and to provide a dedicated regime on liability and redress in the form of the Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress ('N-KL Protocol') at the international level. National legal frameworks based on this protocol are not adequately established in the prevailing food legislation of developing countries. A developing economy like India is willing to import GM food and its derivatives after the successful commercialization of Bt Cotton in 2002. As a party to the N-KL Protocol, it is indispensable for India to formulate a legal framework and to address safety, liability, and regulatory issues surrounding GM foods in conformity with the provisions of the Protocol. The liability mechanism is also important in cases where risk assessment and risk management are still at the implementation stage. Moreover, the country is facing GM infiltration issues with its neighbor Bangladesh. As a precautionary approach, there is a need to formulate rules and procedures of legal liability to address any kind of damage that occurs in transboundary trade. In this context, the proposed work will attempt to analyze the liability regime in the existing Food Safety and Standards Act, 2006 in terms of its applicability and domestic compliance, and to suggest legal and policy options for regulatory authorities.

Keywords: Commercialisation, food safety, FSSAI, genetically modified foods, India, liability.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2233
66 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs

Authors: Muhammad Yasir Wadood, Fatemeh Babaeian

Abstract:

With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure that is compatible with the other components of a UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of a drilled via hole with the microstrip stubs causes large deviations between the measured and the desired results. Moreover, in high-frequency designs the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the fabrication cost significantly and adds a risk of fabrication errors. To combat this issue, in this paper, a via-less UWB microstrip filter is proposed which is designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacing each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, 2) using a bend structure to reduce unwanted coupling effects, and 3) minimising the overall size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) is designed and fabricated. The promising results of the simulation and measurement are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very miniature hardware.
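
For context, the physical length of each quarter-wavelength open-circuit stub is fixed by the guided wavelength at the design frequency. The snippet below is a minimal sketch of that sizing calculation; the assumed effective relative permittivity (about 2.7 for a 20 mil RO4003 line) and the choice of the passband centre as the design frequency are illustrative assumptions, not values reported in the abstract.

    from math import sqrt

    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def quarter_wave_stub_length_mm(f_hz, eps_eff):
        # Guided wavelength on the microstrip line, then a quarter of it, in mm.
        guided_wavelength_m = C0 / (f_hz * sqrt(eps_eff))
        return 0.25 * guided_wavelength_m * 1e3

    # Illustrative example: centre of the reported 3.9-6.6 GHz (1-dB) passband,
    # with an assumed effective relative permittivity of about 2.7.
    f_center_hz = 0.5 * (3.9e9 + 6.6e9)
    print(f"{quarter_wave_stub_length_mm(f_center_hz, 2.7):.2f} mm")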

Keywords: Band-pass filters, inter-digital filter, microstrip, via-less.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 834
65 Effect of a Gravel Bed Flocculator on the Efficiency of Low Cost Water Treatment Plants

Authors: Alaa Hussein Wadi

Abstract:

The principal objective of a water treatment plant is to produce water that satisfies a set of drinking water quality standards at a reasonable price to the consumers. The gravel-bed flocculator provides a simple and inexpensive design for flocculation in small water treatment plants (less than 5000 m3/day capacity). The packed bed of gravel provides ideal conditions for the formation of compact, settleable flocs because of the continuous recontact provided by the sinuous flow of water through the interstices formed by the gravel. The field data obtained from the operation of the water supply treatment unit cover the physical, chemical and biological qualities of the raw and settled water. The experiments were carried out with the aim of assessing the efficiency of the gravel filter in removing turbidity and pathogenic bacteria from the raw water. The water treatment plant, which was constructed for the treatment of river water, was in principle a rapid sand filter. The results show that the average turbidity of the settled water was 4.83 NTU with a standard deviation of 2.893 NTU, indicating that the removal efficiency of the sedimentation tank (gravel filter) was about 67.8%. The pH values fluctuated between 7.75 and 8.15, indicating the alkaline nature of the raw water of the river Shatt Al-Hilla, as expected. The raw water pH is depressed slightly following alum coagulation, and the pH of the settled water ranged from 7.75 to a maximum of 8.05. The bacteriological tests carried out on the water samples were the total coliform test, the E. coli test, and the plate count test. In each test the procedure used was as outlined in the Standard Methods for the Examination of Water and Wastewater (APHA, AWWA, and WPCF, 1985). The gravel filter exhibited low performance in removing the bacterial load: the percentage bacterial removal was a maximum for the total plate count (19%) and a minimum for total coliform (16.82%).
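
As a quick cross-check of the figures quoted above, the removal efficiency is simply the removed fraction of the influent value. The sketch below assumes an average raw-water turbidity of roughly 15 NTU, a hypothetical figure chosen only because it is consistent with the reported settled-water average of 4.83 NTU and the ~67.8% removal; the actual raw-water average is not given in the abstract.

    def removal_efficiency(influent, effluent):
        # Percentage removal of a water-quality parameter (e.g. turbidity in NTU).
        if influent <= 0:
            raise ValueError("influent value must be positive")
        return 100.0 * (influent - effluent) / influent

    # Assumed average raw-water turbidity of ~15 NTU (not stated in the abstract)
    # against the reported settled-water average of 4.83 NTU.
    print(f"{removal_efficiency(15.0, 4.83):.1f} % removal")  # -> 67.8 % removal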

Keywords: Gravel bed flocculator, turbidity, total coliform.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2674
64 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females and decreased sperm count in human males. Although the discharge of these chemical compounds into the environment cannot be stopped, their amount can be reduced through proper evaluation and detection techniques. The available techniques for the determination of these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS) and gas chromatography–mass spectrometry (GC–MS). These techniques are accurate and reliable but have certain limitations, such as the need for skilled personnel, long analysis times, interference, and the requirement of pretreatment steps. Moreover, these techniques are laboratory-bound, and samples are required in large amounts for analysis. In view of the above facts, new methods for the detection of endocrine disrupting compounds should be devised that promise high specificity, ultra-sensitivity, cost effectiveness, efficiency and an easy-to-operate procedure. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining high attention among researchers. The bioelement present in this system makes the developed sensors selective towards the analyte of interest. Nanomaterials provide a large surface area, high electron communication features, enhanced catalytic activity and possibilities for chemical modification. In most cases, nanomaterials also serve as an electron mediator or electrocatalyst for some analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1576