Search results for: Water level model.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11691

321 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing is one of the prominent technologies that lets users change, configure and access services online. It is a computing paradigm that saves the user both cost and time; in practice, it is applied in fields such as education, health and banking. Because cloud computing is an internet-dependent technology, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data that users store at data centers. Scheduling plays a vital role in the cloud computing environment: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. This work analyzes job scheduling for cloud computing, using CloudSim 3.0.3 to recreate the task allocation and distributed scheduling methods. The work discusses job scheduling for a distributed processing environment and, by exploring this issue, finds that it can be performed with minimum time and lower cost. Two load balancing techniques are employed, ‘Throttled stack adjustment policy’ and ‘Active VM load balancing policy’, together with two brokerage services, ‘Advanced Response Time’ and ‘Reconfigure Dynamically’, to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.

Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.
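The abstract above contrasts a Round Robin allocator with throttled load balancing, which only assigns a job to a virtual machine whose active-job count is below a threshold. The following is a minimal, hypothetical Python sketch of those two allocation rules (not the authors' CloudSim 3.0.3 code; the VM names and threshold are illustrative):

```python
from itertools import cycle

class RoundRobinAllocator:
    """Assign each incoming job to the next VM in a fixed circular order."""
    def __init__(self, vm_ids):
        self._next_vm = cycle(vm_ids)

    def allocate(self, job_id):
        return next(self._next_vm)

class ThrottledAllocator:
    """Assign a job only to a VM whose current load is below a threshold."""
    def __init__(self, vm_ids, max_jobs_per_vm=2):
        self.load = {vm: 0 for vm in vm_ids}
        self.max_jobs = max_jobs_per_vm

    def allocate(self, job_id):
        # Pick the least-loaded VM that still has spare capacity; otherwise queue the job.
        vm, load = min(self.load.items(), key=lambda kv: kv[1])
        if load >= self.max_jobs:
            return None                     # job waits until a VM frees up
        self.load[vm] += 1
        return vm

    def release(self, vm):
        self.load[vm] -= 1                  # call when a job finishes on `vm`

if __name__ == "__main__":
    rr = RoundRobinAllocator(["vm0", "vm1", "vm2"])
    th = ThrottledAllocator(["vm0", "vm1", "vm2"])
    for job in range(5):
        print(job, rr.allocate(job), th.allocate(job))
```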

320 CFD-Parametric Study in Stator Heat Transfer of an Axial Flux Permanent Magnet Machine

Authors: Alireza Rasekh, Peter Sergeant, Jan Vierendeels

Abstract:

This paper deals with the numerical simulation of convective heat transfer in the stator disk of an axial flux permanent magnet (AFPM) electrical machine. Overheating is one of the main issues in the design of AFPMs; it mainly occurs in the stator disk and needs to be prevented. A rotor-stator configuration with 16 magnets at the periphery of the rotor is considered. Air is allowed to flow through openings in the rotor disk and through the channels formed between the magnets and in the gap region between the magnets and the stator surface. The rotating channels between the magnets act as a driving force for the air flow. The significant non-dimensional parameters are the rotational Reynolds number, the gap size ratio, the magnet thickness ratio, and the magnet angle ratio. The goal is to find correlations for the Nusselt number on the stator disk in terms of these non-dimensional numbers. Therefore, CFD simulations have been performed with the multiple reference frame (MRF) technique to model the rotary motion of the rotor and the flow around and inside the machine. A minimization method based on a pattern-search algorithm is introduced to find the appropriate values of the reference temperature. The resulting correlations are fast, robust and capable of predicting the stator heat transfer with good accuracy. The results reveal that the magnet angle ratio diminishes the stator heat transfer, whereas the rotational Reynolds number and the magnet thickness ratio improve the convective heat transfer. On the other hand, there is a certain gap size ratio at which the stator heat transfer reaches a maximum.

Keywords: Axial flux permanent magnet, CFD, magnet parameters, stator heat transfer.
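The correlations sought above express the stator Nusselt number as a function of the listed non-dimensional groups. As a hedged illustration of the general power-law form such correlations typically take (the constant, exponents and symbols below are placeholders for the fitted quantities, not the authors' results):

$$\mathrm{Nu} = \frac{h R}{k} = C \,\mathrm{Re}_{\omega}^{\,a}\, G^{\,b}\, T_m^{\,c}\, A_m^{\,d}, \qquad \mathrm{Re}_{\omega} = \frac{\omega R^{2}}{\nu},$$

where $G$, $T_m$ and $A_m$ denote the gap size, magnet thickness and magnet angle ratios, $R$ is a reference radius, $h$ the convection coefficient, $k$ the air conductivity, and $C$, $a$, $b$, $c$, $d$ are obtained by fitting the CFD data.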

319 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string-matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.

Keywords: Information retrieval (IR), unified medical language system (UMLS), Syntax Based Analysis, natural language processing (NLP), medical informatics.
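Q-Map is described above as a configurable string-matching engine indexed on curated knowledge sources. A minimal, hypothetical sketch of that idea (not the authors' implementation; the lexicon entries and concept codes are illustrative only):

```python
import re

# Hypothetical curated lexicon: surface form -> illustrative concept code
LEXICON = {
    "myocardial infarction": "C_MI",
    "diabetes mellitus": "C_DM",
    "hypertension": "C_HTN",
}

def extract_concepts(note: str):
    """Return (concept_code, surface_form, span) for every lexicon hit in the note."""
    hits = []
    lowered = note.lower()
    for phrase, code in LEXICON.items():
        for m in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((code, phrase, m.span()))
    return hits

if __name__ == "__main__":
    note = "Patient with long-standing hypertension and diabetes mellitus; no myocardial infarction."
    for code, phrase, span in extract_concepts(note):
        print(code, phrase, span)
```

A production system like the one described would index a much larger curated vocabulary and add normalization (abbreviations, inflections, negation), but the core lookup step has this shape.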

318 Differences in Stress and Total Deformation Due to Muscle Attachment to the Femur

Authors: Jeong-Woo Seo, Jin-Seung Choi, Dong-Won Kang, Jae-Hyuk Bae, Gye-Rae Tack

Abstract:

To achieve accurate and precise results from finite element analysis (FEA) of bones, it is important to represent the load/boundary conditions as closely as possible to the human body, including the bone properties, the type and force of the muscles, the contact force of the joints, and the location of the muscle attachment. In this study, the difference in the von Mises stress and the total deformation was compared between Case 1, which uses the actual anatomical form of the muscle attached to the femur, and Case 2, which uses a simplified representation of the attachment location, with the same muscle force applied in both. An inverse dynamic musculoskeletal model was simulated using data from an actual walking experiment to improve the accuracy of the muscular forces used as input values for the FEA. The FEA based on the muscular forces calculated through this simulation showed that the maximum von Mises stress and the maximum total deformation in Case 2 were underestimated by 8.42% and 6.29%, respectively, compared to Case 1. The torsion and bending moments at each location of the femur manifest themselves through the stress components. Because of the geometrical/morphological feature of the femur as a long bone, a wider stress distribution, as in Case 1, leads to a greater von Mises stress and total deformation when the stress components are summed. More accurate results can therefore be achieved only when the muscular forces, the attachment location and the attachment form in the FEA of bones match the actual anatomical condition under the various movement conditions of the human body.

Keywords: Musculoskeletal modeling, Finite element analysis, Von-Mises stress, Deformation, Muscle attachment.
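For reference, the von Mises stress compared between the two cases is the standard equivalent stress reported by FEA packages, computed from the stress components as

$$\sigma_{vM} = \sqrt{\tfrac{1}{2}\left[(\sigma_{xx}-\sigma_{yy})^{2} + (\sigma_{yy}-\sigma_{zz})^{2} + (\sigma_{zz}-\sigma_{xx})^{2}\right] + 3\left(\tau_{xy}^{2} + \tau_{yz}^{2} + \tau_{zx}^{2}\right)},$$

which is the quantity reported as the maximum stress in both cases.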

317 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume

Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto

Abstract:

Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his/her answers if multiple tests are carried out. In this article, we present a modified index called the “multi-channel Laterality Index at Rest (mc-LIR)”, obtained by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index uses measurements at multiple positions near Fpz, as defined by the international 10-20 positioning system. Using 24 subjects, the dependence on the number of measuring points used to calculate the mc-LIR and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR, and the cross-validation error is reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.

Keywords: Stress, functional near-infrared spectroscopy, frontal lobe, state-trait anxiety inventory score.
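The estimation step described above is a simple linear regression from the mc-LIR to the STAI score, with a cross-validation error reported. A minimal sketch of that procedure with scikit-learn (synthetic placeholder data, not the authors' 24-subject measurements):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: one mc-LIR value and one STAI score per subject.
rng = np.random.default_rng(0)
mc_lir = rng.normal(size=(24, 1))
stai = 40 + 5 * mc_lir[:, 0] + rng.normal(scale=2.0, size=24)

model = LinearRegression().fit(mc_lir, stai)
print("slope:", model.coef_[0], "intercept:", model.intercept_)

# Leave-one-out cross-validation error (mean absolute error).
mae = -cross_val_score(LinearRegression(), mc_lir, stai,
                       cv=LeaveOneOut(),
                       scoring="neg_mean_absolute_error").mean()
print("LOO cross-validation MAE:", mae)
```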

316 The Effect of Socio-Affective Variables in the Relationship between Organizational Trust and Employee Turnover Intention

Authors: Paula A. Cruise, Carvell McLeary

Abstract:

Employee turnover leads to lowered productivity, decreased morale and work quality, and psychological effects associated with employee separation and replacement. Yet, it remains unknown why talented employees willingly withdraw from organizations. This uncertainty is worsened as studies a) prioritize organizational over individual predictors, resulting in restriction of range in turnover measurement; b) focus on actual rather than intended turnover, thereby limiting conceptual understanding of the turnover construct and its relationship with other variables; and c) produce inconsistent findings across cultures, contexts and industries despite a clear need for a unified perspective. The current study addressed these gaps by adopting the theory of planned behavior (TPB) framework to examine socio-cognitive factors in organizational trust and individual turnover intentions among bankers and energy employees in Jamaica. In a comparative study of n = 369 [nbank = 264, male = 57 (22.73%); nenergy = 105, male = 45 (42.86%)], it was hypothesized that organizational trust was a predictor of employee turnover intention, and that the effect of individual, group, cognitive and socio-affective variables varied across industry. Findings from structural equation modelling confirmed the hypothesis, with a model including both cognitive and socio-affective variables being a better fit [CMIN (χ2) = 800.067, df = 364, p ≤ .000; CFI = 0.950; RMSEA = 0.057 with 90% C.I. (0.052 - 0.062); PCLOSE = 0.016; PNFI = 0.818] in predicting turnover intention. The findings are discussed in relation to socio-cognitive components of trust models and predicting negative employee behaviors across cultures and industries.

Keywords: Context-specific organizational trust, cross-cultural psychology, theory of planned behavior, employee turnover intention.

315 Analyzing the Impact of Spatio-Temporal Climate Variations on the Rice Crop Calendar in Pakistan

Authors: Muhammad Imran, Iqra Basit, Mobushir Riaz Khan, Sajid Rasheed Ahmad

Abstract:

The present study investigates the space-time impact of climate change on the rice crop calendar in tropical Gujranwala, Pakistan. The climate change impact was quantified through the climatic variables, whereas the existing calendar of the rice crop was compared with the phenological stages of the crop, depicted through the time series of the Normalized Difference Vegetation Index (NDVI) derived from Landsat data for the decade 2005-2015. Local maxima were extracted from the NDVI time series to compute the rice phenological stages. Panel models with fixed and cross-section fixed effects were used to establish the relation between the climatic parameters and the NDVI time series across villages and across rice growing periods. Results show that the climatic parameters have a significant impact on the rice crop calendar. Moreover, the fixed effect model is a significant improvement over the cross-section fixed effect model (R-squared equal to 0.673 vs. 0.0338). We conclude that high inter-annual variability of the climatic variables causes high variability of NDVI and thus a shift in the rice crop calendar. Moreover, the inter-annual (temporal) variability of the rice crop calendar is high compared to the inter-village (spatial) variability. We suggest that local rice farmers adapt to this shift in the rice crop calendar.

Keywords: Landsat NDVI, panel models, temperature, rainfall.
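The phenological stages above are derived by locating local maxima in the NDVI time series. A minimal sketch of that step with SciPy (a synthetic NDVI series, not the authors' Landsat data):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic 16-day-composite NDVI series for one village over one year.
doy = np.arange(0, 365, 16)                                   # day of year per composite
ndvi = 0.25 + 0.45 * np.exp(-((doy - 230) / 40.0) ** 2)       # one growing-season peak
ndvi += np.random.default_rng(1).normal(scale=0.02, size=doy.size)

# Local maxima above a vegetation threshold mark the peak-growth stage.
peaks, _ = find_peaks(ndvi, height=0.5, distance=3)
for p in peaks:
    print(f"peak NDVI {ndvi[p]:.2f} around day of year {doy[p]}")
```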

314 Impact of Long Term Application of Municipal Solid Waste on Physicochemical and Microbial Parameters and Heavy Metal Distribution in Soils in Accordance to Its Agricultural Uses

Authors: Rinku Dhanker, Suman Chaudhary, Tanvi Bhatia, Sneh Goyal

Abstract:

Municipal Solid Waste (MSW), being a rich source of organic materials, can be used for agricultural applications as an important source of nutrients for soil and plants. This is also an alternative, beneficial management practice for the MSW generated in developing countries. In the present study, soil samples treated with MSW for the last four to six years were collected from farmers' fields in Rohtak and Gurgaon districts (Haryana, India). The samples were analyzed for all important agricultural parameters and compared with untreated control soil samples. The treated soil at the farmers' fields showed increases in total N by 48 to 68%, P by 45.7 to 51.3%, and K by 60 to 67% compared to the untreated soil samples. Application of sewage sludge at different sites increased microbial biomass C by 60 to 68% compared to untreated soil. There was a significant increase in total Cu, Cr, Ni, Fe, Pb, and Zn in all sewage sludge amended soil samples; however, the concentrations of all the metals were still below the current permitted (EU) limits. To study the adverse effect of heavy metal accumulation on various soil microbial activities, sewage sludge samples (from the wastewater treatment plant at Gurgaon) were artificially contaminated with heavy metal concentrations above the EU limits. They were then applied to soil samples at different rates (0.5 to 4.0%) and incubated for 90 days under laboratory conditions. The samples were drawn at different intervals and analyzed for various parameters like pH, EC, total N, P, K, microbial biomass C, carbon mineralization, and diethylenetriaminepentaacetic acid (DTPA) extractable heavy metals. The results were compared to the uncontaminated sewage sludge. Increasing the level of sewage sludge from 0.5 to 4% led to a build-up of organic C and of total N, P and K content at the early stages of incubation, but organic C decreased after 90 days because of decomposition of organic matter. Biomass production was significantly increased in both contaminated and uncontaminated sewage sludge amended soil samples, but this also led to slight increases in metal accumulation and bioavailability in the soil. The maximum metal concentrations were found in the treatment with 4% of contaminated sewage sludge amendment.

Keywords: Heavy metals, municipal sewage sludge, sustainable agriculture, soil fertility, quality.

313 Studying the Temperature Field of Hypersonic Vehicle Structure with Aero-Thermo-Elasticity Deformation

Authors: Geng Xiangren, Liu Lei, Gui Ye-Wei, Tang Wei, Wang An-ling

Abstract:

Malfunction of the thermal protection system (TPS) caused by aerodynamic heating is a latent threat to aircraft structural safety. Accurately predicting the structural temperature field is therefore quite important for the TPS design of a hypersonic vehicle. Since Thornton's work in 1988, the coupled treatment of aerodynamic heating and heat transfer has developed rapidly. However, little attention has been paid to the influence of structural deformation on aerodynamic heating and the structural temperature field. In flight, especially long-endurance flight, the structural deformation caused by aerodynamic heating and temperature rise has a direct impact on the aerodynamic heating and the structural temperature field, so the coupled interaction cannot be neglected. In this paper, based on the method of static aero-thermo-elasticity and considering the influence of aero-thermo-elastic deformation, the coupled aerodynamic heating and heat transfer results for a hypersonic vehicle wing model were calculated. The results show that, for low-curvature regions such as the fuselage or the center section of the wing, structural deformation has little effect on the temperature field. However, for the stagnation region with high curvature, the coupled effect is not negligible, so it is quite important for structural temperature prediction to take the effect of elastic deformation into account. This work lays a solid foundation for improving the prediction accuracy of the temperature distribution of aircraft structures and the evaluation capacity of structural performance.

Keywords: Aero-thermo-elasticity, elastic deformation, structural temperature, multi-field coupling.

312 Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

Authors: Omar F. Hamad, T. Marwala

Abstract:

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping the max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery which could otherwise be hindered by the induced packet loss occurring in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1,000 - α), respectively, with α arbitrarily chosen between 0 and 1,000 to ensure that the NGS values, used as node IDs, maintain a good likelihood of uniqueness and a balance between the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS smaller than that of the source node. To maintain a sense of load balance, children of each level's siblings are evenly distributed such that a node cannot accept a second child, and so on, until all of its siblings that are able to do so have already acquired the same number of children; this is done logically from left to right in a conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme with Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP) have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delays.

Keywords: Overlay multicast, Available bandwidth, Max-heap-form overlay, Induced packet loss, Bandwidth-latency product, Node Gain Score (NGS).
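The NGS above is a weighted combination of a bandwidth discrepancy ratio and a latency discrepancy ratio, and nodes are then placed on a max-heap overlay in score order. A minimal, hypothetical sketch of the score and the heap ordering (the exact ratio definitions and the sample values are assumptions, not the authors' formulas):

```python
import heapq

def node_gain_score(Ba, Br, Lp, Lb, alpha=500):
    """Illustrative NGS: weighted sum of bandwidth and latency discrepancy ratios."""
    bdr = Ba / Br            # estimated available vs. requested bandwidth
    ldr = Lb / Lp            # suggested best latency vs. proposed latency to parent
    return alpha * bdr + (1000 - alpha) * ldr

# (Ba, Br, Lp, Lb) per joining node -- hypothetical measurements
nodes = {"n1": (8.0, 2.0, 40.0, 20.0),
         "n2": (4.0, 2.0, 25.0, 20.0),
         "n3": (6.0, 3.0, 60.0, 20.0)}

# Python's heapq is a min-heap, so negated scores give max-heap (highest NGS first) order.
heap = [(-node_gain_score(*params), name) for name, params in nodes.items()]
heapq.heapify(heap)
while heap:
    neg_score, name = heapq.heappop(heap)
    print(name, "NGS =", -neg_score)
```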

311 Analytical and Numerical Results for Free Vibration of Laminated Composites Plates

Authors: Mohamed Amine Ben Henni, Taher Hassaine Daouadji, Boussad Abbes, Yu Ming Li, Fazilay Abbes

Abstract:

The reinforcement and repair of concrete structures by bonding composite materials have become relatively common operations. Different types of composite materials can be used: carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP), as well as functionally graded material (FGM). The development of analytical and numerical models describing the mechanical behavior of civil engineering structures reinforced by composite materials is therefore necessary; these models will enable engineers to select, design, and size adequate reinforcements for the various types of damaged structures. This study focuses on the free vibration behavior of orthotropic laminated composite plates using a refined shear deformation theory. In these models, the distribution of transverse shear stresses is taken as parabolic, satisfying the zero-shear-stress condition on the top and bottom surfaces of the plates without using shear correction factors. In this analysis, the equation of motion for simply supported thick laminated rectangular plates is obtained by using Hamilton's principle. The accuracy of the developed model is demonstrated by comparing our results with solutions derived from other higher-order models and with data found in the literature. In addition, a finite element analysis is used to calculate the natural frequencies of the laminated composite plates, and the results are compared with those obtained by the analytical approach.

Keywords: Composite materials, laminated composite plate, shear deformation theory of plates, finite element analysis, free vibration.

310 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting this more natural and promising multiclass classification.

Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.
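XGBoost handles missing feature values natively by learning a default split direction, which is what makes it attractive for this cohort. A minimal sketch of multiclass classification with missing values and 10-fold cross-validation (synthetic placeholder data, not the ADNI features):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
X[rng.random(X.shape) < 0.28] = np.nan     # roughly 28% missing values
y = rng.integers(0, 4, size=400)           # 4 classes, e.g. CN, EMCI, LMCI, AD

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)  # NaNs handled natively
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```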

309 Time Series Forecasting Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset used is the hourly Beijing Air Quality Dataset from the University of California, Irvine (UCI) website, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.

Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
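Every model compared above consumes a fixed-length look-back window and predicts a chosen number of steps ahead. A minimal sketch of how such windows are cut from an hourly series (generic code, not the authors' pipeline):

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Turn a 1-D hourly series into (X, y) pairs: X holds `look_back` past values,
    y holds the value `horizon` steps after the window ends."""
    X, y = [], []
    for start in range(len(series) - look_back - horizon + 1):
        X.append(series[start:start + look_back])
        y.append(series[start + look_back + horizon - 1])
    return np.array(X), np.array(y)

hourly = np.sin(np.arange(24 * 30) * 2 * np.pi / 24)     # synthetic month of hourly data
X, y = make_windows(hourly, look_back=24, horizon=1)      # one-day window, 1 hour ahead
print(X.shape, y.shape)                                   # (696, 24) (696,)
```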

308 On the AC-Side Interface Filter in Three-Phase Shunt Active Power Filter Systems

Authors: Mihaela Popescu, Alexandru Bitoleanu, Mircea Dobriceanu

Abstract:

The proper selection of the AC-side passive filter interconnecting the voltage source converter to the power supply is essential to obtain satisfactory performance of an active power filter system. The use of an LCL-type filter has the advantage of eliminating the high-frequency switching harmonics in the current injected into the power supply. This paper is mainly focused on analyzing the influence of the interface filter parameters on the active filtering performance, and some design aspects are pointed out. The design of the AC interface filter starts from transfer functions, imposing as a performance requirement significant attenuation of the switching-frequency current harmonics without affecting the harmonics to be compensated. A Matlab/Simulink model of the entire active filtering system, including a concrete nonlinear load, has been developed to examine the system performance. It is shown that a gamma LC filter can accomplish the attenuation requirement for the current provided by the converter. Moreover, the existence of an optimal value of the grid-side inductance which minimizes the total harmonic distortion factor of the power supply current is pointed out. Nevertheless, a small converter-side inductance and a damping resistance in series with the filter capacitance are absolutely needed in order to keep the ripple and oscillations of the current at the converter side within acceptable limits. The effect of changes in the LCL-filter parameters is evaluated. It is concluded that good active filtering performance can be achieved with small values of the capacitance and the converter-side inductance.

Keywords: Active power filter, LCL filter, Matlab/Simulink modeling, Passive filters, Transfer function.
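For context, a common starting point when shaping the transfer functions mentioned above is the converter-voltage-to-grid-current response of an undamped LCL filter (parasitic resistances neglected; the symbols are generic, with $L_1$ the converter-side inductance, $L_2$ the grid-side inductance and $C_f$ the filter capacitance):

$$\frac{i_g(s)}{v_c(s)} = \frac{1}{s^{3} L_1 L_2 C_f + s\,(L_1 + L_2)}, \qquad \omega_{res} = \sqrt{\frac{L_1 + L_2}{L_1 L_2 C_f}},$$

which shows the 60 dB/decade attenuation above the resonance frequency that suppresses the switching harmonics, and why a damping resistance is needed around $\omega_{res}$.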

307 Effect of Organic-waste Compost Addition on Leaching of Mineral Nitrogen from Arable Land and Plant Production

Authors: Jakub Elbl, Lukas Plošek, Antonín Kintl, Jaroslav Záhora, Jitka Přichystalová, Jaroslav Hynšt

Abstract:

Application of compost in agriculture is very desirable worldwide. In the Czech Republic, compost is most often used to improve soil structure and increase the content of soil organic matter, but the effects of compost addition on the fate of mineral nitrogen are only scarcely described. This paper deals with the possibility of using a combined application of compost, mineral and organic fertilizers to reduce the leaching of mineral nitrogen from arable land. To demonstrate the effect of compost addition on the leaching of mineral nitrogen, we performed a pot experiment. Lactuca sativa L. was used as a model crop and cultivated for 35 days in a climate chamber in thoroughly homogenized arable soil. Ten variants of the experiment were prepared: two control variants (pure arable soil and arable soil with added compost), four variants with different doses of mineral and organic fertilizers, and four variants with the same doses of mineral and organic fertilizers plus the addition of compost. The highest decrease in mineral nitrogen leaching, about 417% in comparison with the control variant, was observed for the simultaneous application of soluble humic substances and compost to the soil samples. Application of these organic compounds also supported microbial activity and nitrogen immobilization, documented by the highest soil respiration and by the highest value of the index of nitrogen availability. The production of plant biomass after this application was not the highest, owing to microbial competition for the nutrients in the soil, but was 24% higher in comparison with the control variant. To support these promising results, the experiment should be repeated under field conditions.

Keywords: Nitrogen, Compost, Salad, Arable land.

306 Infection in the Sentence: The Castration of a Black Woman's Dream of Authorship as Manifested in Buchi Emecheta's Second Class Citizen

Authors: Aseel Hatif Jassam, Hadeel Hatif Jassam

Abstract:

The paper discusses the phallocentric discourse that is challenged by women in general, and women of color in particular, despite the simultaneity of oppression due to race, class, and gender in the diaspora. The paper therefore gives a brief account of women's experience in the light of postcolonial feminist theory. It also casts light on the theories of Luce Irigaray and Hélène Cixous, two feminist theorists who support and advise women to have their own discourse to challenge the infectious patriarchal sentence advocated by Sigmund Freud and Harold Bloom's model of literary history. Black women authors like Buchi Emecheta, as well as her alter ego Adah, a Nigerian-born girl and the protagonist of her semi-autobiographical novel Second Class Citizen, suffer from this phallocentric and oppressive sentence and from displacement as they migrate from Nigeria, a former British colony where they feel marginalized, to North London with the hope of realizing their dreams. Yet in the British diaspora they experience culture shock, continue to suffer further marginalization due to class and race, and are insulted and inferiorized, ironically, by their patriarchal husbands, who try to put an end to their dreams of authorship. With the phallocentric belief that women are not capable of self-representation in the background of their mindsets, the violent Sylvester Onwordi and Francis Obi, the husbands of Emecheta and Adah respectively, oppress them by burning their authorial voices, represented by the novels the women write while struggling with economically atrocious living conditions in the British diaspora.

Keywords: Authorship, British diaspora, discourse, phallocentric, patriarchy.

305 Removal of Total Petroleum Hydrocarbons from Contaminated Soils by Electrochemical Method

Authors: D. M. Cocârță, I. A. Istrate, C. Streche, D. M. Dumitru

Abstract:

Soil contamination is a worldwide issue that has received considerable attention in recent decades. The main pollutants affecting soils are especially those resulting from oil extraction, transport and processing. This paper presents results obtained in the framework of a research project focused on the management of sites contaminated with petroleum products (REMPET). One of the specific objectives of the REMPET project was to assess electrochemical treatment (improved with polarity change with respect to the typical approach) as an option for the remediation of total petroleum hydrocarbons (TPHs) from contaminated soils. Petroleum hydrocarbon compounds attach to soil components and are difficult to remove and degrade. Electrochemical treatment is a physicochemical treatment that has gained acceptance as an alternative method for the remediation of organically contaminated soils compared with traditional methods such as bioremediation and chemical oxidation. This type of treatment needs a short time and has high removal efficiency, being usually applied in heterogeneous soils with low permeability. During the experimental tests, the following parameters were monitored: pH, redox potential, humidity, current intensity, and energy consumption. The electrochemical method was applied in an experimental setup with dimensions of 450 mm x 150 mm x 150 mm (L x l x h). The setup length was divided into three electrochemical cells connected to two power supplies. The power supply configuration was arranged in such a manner that each cell has a cathode and an anode without overlapping. The initial TPH concentration in the soil was 1420.28 mg/kg dw. The remediation method was applied for only 21 days, by which time an average removal efficiency of 31% was already observed, with better results in the anode area than in the cathode area (33% versus 27%). The energy consumption registered after the experiment was 10.6 kWh for the exterior power supply and 16.1 kWh for the interior one. Taking into account that, at the national level, the most used methods for soil remediation are bioremediation (which needs too much time to be implemented and depends on many factors) and thermal desorption (which involves high costs), the study of electrochemical treatment offers an alternative to these two methods and their limitations.

Keywords: Electrochemical remediation, pollution, soil contamination, total petroleum hydrocarbons

304 Collocation Errors in English as Second Language (ESL) Essay Writing

Authors: Fatima Muhammad Shitu

Abstract:

In language learning, second language learners as well as native speakers commit errors in their attempt to achieve competence in the target language. The realm of collocation has to do with meaning relations between lexical items. In all human languages, there is a kind of ‘natural order’ in which words are arranged or related to one another in sentences, so much so that when a word occurs in a given context, the related or naturally co-occurring word will automatically come to mind. It becomes an error, therefore, if students inappropriately pair or arrange such ‘naturally’ co-occurring lexical items in a text. It has been observed that most of the second language learners in this research group commit collocation errors. A study of this kind is very significant as it gives insight into the kinds of errors committed by learners. This will help the language teacher to identify the sources and causes of such errors as well as correct them, thereby guiding, helping and leading the learners towards achieving some level of competence in the language. The aim of the study is to understand the nature of these errors as stumbling blocks to effective essay writing. The objectives of the study are to identify the errors, to analyze their structural compositions so as to determine whether there are similarities between students in this regard, and to find out whether there are patterns to these kinds of errors which will enable the researcher to understand their sources and causes. As descriptive research, the study samples some nine hundred essays collected from three hundred undergraduate learners of English as a second language in the Federal College of Education, Kano, North-West Nigeria, i.e. three essays per student. The essays, which were written during the lecture hour on three different lecture occasions, were of similar thematic preoccupations (i.e. the same topics) and length (i.e. the same number of words). The errors were identified in a systematic manner whereby each identified error was recorded only once, even if it occurred several times in a student's essays. The data were collated by converting the identified numbers of occurrences into percentages. The findings from the study indicate that there are similarities as well as regular and repeated errors which provide a pattern. Based on the pattern identified, the conclusion is that students' collocation errors are attributable to poor teaching and learning, which results in wrong generalization of rules.

Keywords: Collocations, errors, collocation errors, second language learning.

303 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices relative to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is considered alongside passive linear viscous damping and an optimal non-causal semi-active control strategy. The optimal control strategy requires optimization, during which the Euler-Lagrange equations are solved numerically. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined. To fully investigate the behavior of the structure subjected to the sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control showed better structural performance than the passive bridge deck case.

Keywords: Bridge structures, passive control, seismic, semi-active control, viscous damping.
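As a point of reference for the transmissibility ratios defined above, the classical displacement transmissibility of a single-degree-of-freedom system with viscous damping ratio $\zeta$ under harmonic base excitation (a standard textbook expression, not the authors' semi-active formulation) is

$$T_d(\omega) = \sqrt{\frac{1 + (2\zeta r)^{2}}{(1 - r^{2})^{2} + (2\zeta r)^{2}}}, \qquad r = \frac{\omega}{\omega_n},$$

so increasing passive damping lowers the resonant peak near $r = 1$ but degrades isolation at higher frequencies, which is the trade-off semi-active control seeks to relax.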

302 Opportunities for Precision Feed in Apiculture for Managing the Efficacy of Feed and Medicine

Authors: John Michael Russo

Abstract:

Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed method is then evaluated. Finally, precision feed methods are examined as accelerants with potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state-of-the-art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication. Technology such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate specific needs of individual livestock. A major component is data; they integrate precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies to translate data into optimized action in the apiary, particularly through automation.

Keywords: Apiculture, precision apiculture, RNA varroa treatment, honeybee feed applications.

301 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization

Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz

Abstract:

PV energy prices are declining rapidly. To take advantage of those prices and lower the carbon footprint, operational practices must be modified; this undoubtedly challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that contributes modulation capacity to the electrode current distribution system, so that the DC current can be raised in the daytime and lowered at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free, dogbone-type bar that ensures operation free of short circuits, allows hot-swappable repairs, and improves current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths to bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility's ventilation, so the system operates at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance property stands out and is paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element method model predicting electric and thermal performance under various industrial scenarios is presented. Preliminary results obtained in an electrowinning facility with industrial prototypes are included.

Keywords: Electrowinning, intercell bars, PV energy, current modulation.

300 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

Numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control system using a shock-induced, self-impinging supersonic secondary double jet. Parametric analytical studies have been carried out at different secondary injection locations to identify the most asymmetric distribution of the main gas flow due to shock waves, which produces the desirable side force most effectively for vectoring. The results from the parametric studies of the case on hand reveal that the shock-induced self-impinging supersonic secondary double jet is more efficient at certain locations in the divergent region of a CD nozzle than a case with a single supersonic jet of the same mass flow rate. We observed that the best axial location of the self-impinging supersonic secondary double jet nozzle with a given jet interaction angle, built into a CD nozzle having an area ratio of 1.797, is 0.991 times the primary nozzle throat diameter from the throat location. We also observed that flexible steering is possible by invoking an ON/OFF facility for the secondary nozzles to meet onboard mission requirements. Through our case studies we concluded that the supersonic self-impinging secondary double jet, at a predesigned jet interaction angle and location, can provide more flexible steering options, offering 8.81% higher thrust-vectoring efficiency than the conventional single supersonic secondary jet, without compromising the payload capability of a supersonic aerospace vehicle.

Keywords: Fluidic thrust vectoring, rocket steering, self-impinging secondary supersonic jet, TVC in aerospace vehicles.

299 Computational Identification of Bacterial Communities

Authors: Eleftheria Tzamali, Panayiota Poirazi, Ioannis G. Tollis, Martin Reczko

Abstract:

Stable bacterial polymorphism on a single limiting resource may appear if metabolic interactions take place between the evolved strains that allow the exchange of essential nutrients [8]. Towards an attempt to predict the possible outcome of long-running evolution experiments, a network based on the metabolic capabilities of homogeneous populations of every single-gene knockout strain (nodes) of the bacterium E. coli is reconstructed. Potential metabolic interactions (edges) are allowed only between strains of different metabolic capabilities. Bacterial communities are determined by finding cliques in this network. Growth of the emerging hypothetical bacterial communities is simulated by extending the metabolic flux balance analysis model of Varma et al. [2] to embody heterogeneous cell population growth in a mutual environment. Results from aerobic growth on 10 different carbon sources are presented. The upper bounds of the diversity that can emerge from single-cloned populations of E. coli are determined, such as the number of strains that appear to metabolically differ from most strains (highly connected nodes), the maximum clique size, as well as the number of all possible communities. Certain single-gene deletions are identified to consistently participate in our hypothetical bacterial communities under most environmental conditions, implying a pattern of growth-condition-invariant strains with similar metabolic effects. Moreover, evaluation of all the hypothetical bacterial communities under growth on pyruvate reveals heterogeneous populations that can exhibit superior growth performance when compared to the performance of the homogeneous wild-type population.

Keywords: Bacterial polymorphism, clique identification, dynamic FBA, evolution, metabolic interactions.
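Communities are defined above as cliques in the metabolic-interaction network. A minimal sketch of that step with NetworkX (a toy strain graph with hypothetical strain names, not the genome-scale E. coli reconstruction):

```python
import networkx as nx

# Toy network: nodes are knockout strains, edges are potential metabolic interactions.
G = nx.Graph()
G.add_edges_from([
    ("d_pykF", "d_pgi"), ("d_pykF", "d_zwf"), ("d_pgi", "d_zwf"),
    ("d_pgi", "d_sdhA"), ("d_zwf", "d_sdhA"),
])

# Every maximal clique of two or more strains is a candidate bacterial community.
for clique in nx.find_cliques(G):
    if len(clique) >= 2:
        print(sorted(clique))
```

Each clique found this way would then be grown with the extended flux balance analysis model to test whether the mixed population outperforms the wild type.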

298 A Novel, Cost-effective Design to Harness Ocean Energy in the Developing Countries

Authors: S. Ayub, S.N. Danish, S.R. Qureshi

Abstract:

The world's population continues to grow at a quarter of a million people per day, increasing the consumption of energy, and the world now faces an energy crisis. In response, the principles of renewable energy have gained popularity. Much advancement has been made in developing wind and solar energy farms across the world, but these farms are not enough to meet the world's energy requirements. This has attracted investors to pursue new sources of energy as substitutes. Among these sources, the extraction of energy from ocean waves is considered the best option: the world's oceans contain enough energy to meet the world's requirements, and significant advancements in design and technology are being made to turn waves into a continuous source of energy. One major hurdle in launching wave energy devices in a developing country like Pakistan is the initial cost. A simple, reliable and cost-effective wave energy converter (WEC) is required to meet the nation's energy need. This paper presents a novel design, proposed by team SAS, for harnessing wave energy. The paper has three major sections. The first section gives a brief and concise view of ocean wave creation and propagation and the energy carried by waves. The second section explains the design of SAS-2, in which a gear chain mechanism is used for transferring energy from the buoy to a rotary generator. The third section explains the manufacturing of a scaled-down model of SAS-2; many modifications were made at the troubleshooting stage. The design of SAS-2 is simple and requires very little maintenance, and its initial cost is very low. SAS-2 is producing electricity at Clifton, which has proved it to be a cost-effective and reliable means of harnessing wave energy for developing countries.

Keywords: Clean Energy, Wave energy

297 Interactive Effects in Blended Learning Mode: Exploring Hybrid Data Sources and Iterative Linkages

Authors: Hock Chuan, Lim

Abstract:

This paper presents an approach for identifying interactive effects using Network Science (NS) supported by Social Network Analysis (SNA) techniques. Based on the general observation that learning processes and behaviors are shaped by social relationships and influenced by the learning environment, the central idea was to understand both the human and non-human interactive effects for a blended learning mode of delivery of computer science modules. Important findings include: (a) non-human nodes are important in influencing centrality and transfer; (b) the degree of non-human and human connectivity impacts learning. This project reveals that the NS pattern and connectivity, as measured by node relationships, offer an alternative approach for hypothesis generation and for the design of qualitative data collection. An iterative process further reinforces the analysis. While experimental simulation is itself an interesting alternative, a hybrid combination of experimental simulation and qualitative data collection presents itself as a promising and viable means to study a complex scenario such as the blended learning delivery mode. The primary value of this paper lies in the design of the approach for studying interactive effects of human (social nodes) and non-human (learning/study environment and Information and Communication Technologies (ICT) infrastructure nodes) components. In conclusion, this project adds to the understanding and use of SNA to model and study interactive effects in blended social learning.

Keywords: Blended learning, network science, social learning, social network analysis, study environment.

296 The Effect of Cooperative Learning on Academic Achievement of Grade Nine Students in Mathematics: The Case of Mettu Secondary and Preparatory School

Authors: Diriba Gemechu, Lamessa Abebe

Abstract:

The aim of this study was to examine the effect of the cooperative learning method on students' academic achievement, and on achievement level, over the usual method of teaching different topics of mathematics. The study also examines students' perceptions of cooperative learning. Cooperative learning is the instructional strategy in which pairs or small groups of students with different levels of ability work together to accomplish a shared goal; the aim of this cooperation is for students to maximize their own and each other's learning, with members striving for joint benefit. The teacher's role changes from sage on the stage to guide on the side. Cooperative learning, due to its influential aspects, is among the most prevalent teaching-learning techniques in the modern world. The study was therefore conducted to examine the effect of cooperative learning on the academic achievement of grade 9 students in mathematics in the case of Mettu Secondary and Preparatory School. Two sample sections were randomly selected, with one section serving as the experimental group and the other as the comparison group. The data gathering instruments were achievement tests and questionnaires. The experimental group was taught with the STAD method of cooperative learning while the usual method was used in the comparison group; the experiment lasted for one semester. To determine the effect of cooperative learning on students' academic achievement, the significance of the difference between the scores of the groups at the 0.05 level was tested by applying a t-test, and the effect size was calculated to see the strength of the treatment. Students' perceptions of the method were assessed through percentages of the questionnaire responses. During data analysis, each group was divided into high and low achievers on the basis of their previous mathematics results. Data analysis revealed that the experimental and comparison groups were almost equal in mathematics at the beginning of the experiment, while the experimental group scored significantly higher than the comparison group on the posttest. Additionally, the comparison of mean posttest scores of high achievers indicates a significant difference between the two groups, and the same is true for the low-achieving students of both groups on the posttest. Hence, the result of the study indicates the effectiveness of the method for mathematics topics as compared to the usual method of teaching.

Keywords: Cooperative learning, academic achievement, experimental group, comparison group.
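The comparison above rests on a t-test of the posttest means at the 0.05 level plus an effect size. A minimal sketch of that computation on synthetic scores (illustrative data, not the study's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(loc=68, scale=10, size=40)   # posttest scores, STAD section
comparison = rng.normal(loc=60, scale=10, size=40)     # posttest scores, usual method

t, p = stats.ttest_ind(experimental, comparison)
pooled_sd = np.sqrt((experimental.var(ddof=1) + comparison.var(ddof=1)) / 2)
cohens_d = (experimental.mean() - comparison.mean()) / pooled_sd   # effect size

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {cohens_d:.2f}")
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")
```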

295 Neural Network Evaluation of FRP Strengthened RC Buildings Subjected to Near-Fault Ground Motions having Fling Step

Authors: Alireza Mortezaei, Kimia Mortezaei

Abstract:

Recordings from recent earthquakes have provided evidence that ground motions in the near field of a rupturing fault differ from ordinary ground motions, as they can contain a large energy, or “directivity”, pulse. This pulse can cause considerable damage during an earthquake, especially to structures with natural periods close to those of the pulse. Failures of modern engineered structures observed within the near-fault region in recent earthquakes have revealed the vulnerability of existing RC buildings to pulse-type ground motions. This may be because these modern structures were designed primarily using the design spectra of available standards, which have been developed using stochastic processes with relatively long duration that characterize more distant ground motions. Many recently designed and constructed buildings may therefore require strengthening in order to perform well when subjected to near-fault ground motions. Fiber reinforced polymers are considered a viable alternative, due to their relatively easy and quick installation, low life-cycle costs and zero maintenance requirements. The objective of this paper is to investigate the adequacy of Artificial Neural Networks (ANN) to determine the three-dimensional dynamic response of FRP-strengthened RC buildings under near-fault ground motions. For this purpose, one ANN model is proposed to estimate the base shear force, base bending moments and roof displacement of buildings in two directions. A training set of 168 buildings and a validation set of 21 buildings are produced from FE analysis results for the dynamic response of RC buildings under near-fault earthquakes. It is demonstrated that the neural network based approach is highly successful in determining the response.

Keywords: Seismic evaluation, FRP, neural network, near-fault ground motion
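The ANN above maps building and ground-motion descriptors to response quantities (base shear, base bending moments, roof displacement). A minimal sketch of such a multi-output regression network with scikit-learn (synthetic placeholder features and targets, and an arbitrarily chosen architecture, not the authors' trained model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(189, 8))                                       # 168 training + 21 validation samples
Y = X @ rng.normal(size=(8, 5)) + 0.1 * rng.normal(size=(189, 5))   # 5 outputs (shears, moments, roof disp.)

X_train, Y_train, X_val, Y_val = X[:168], Y[:168], X[168:], Y[168:]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0))
model.fit(X_train, Y_train)
print("validation R^2:", model.score(X_val, Y_val))
```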

294 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape

Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin

Abstract:

Adopting a greenery approach on architectural bases is an indispensable strategy for improving ecological habitats, decreasing the heat-island effect, purifying air quality, and relieving surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. Whether plant design attains the best visual quality and ideal carbon dioxide fixation depends on whether greenery is used appropriately according to the nature of the architectural base. To achieve this goal, architects and landscape architects need to be provided with sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at large scale, and most architects still rely on people with years of expertise for the selection and disposition of planting at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology as distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of the bases of green buildings in cities and of various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different building sunshade levels. Initially, taking the shading of sunshine on the greening bases as the starting point, the effects of the shade produced by different building types on the greening strategies were analyzed. Then, by measuring the PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated and a DLI map was established in order to evaluate the effects of building shading on the environmental greening, thereby serving as a reference for plant selection and allocation. The results are to be applied in the evaluation of environmental greening of green buildings and to establish a “right plant, right place” design strategy of multi-level ecological greening, for application in urban design and landscape design development, as well as greening criteria to feed back to eco-city green buildings.

Keywords: Daily light integral, plant design, urban open space.
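The DLI map above integrates the measured PAR, expressed as photosynthetic photon flux density (PPFD, µmol m⁻² s⁻¹), over the photoperiod into mol m⁻² d⁻¹. A minimal sketch of that conversion from logged readings (synthetic values, not the study's measurements):

```python
def daily_light_integral(ppfd_samples, interval_s):
    """DLI [mol m^-2 d^-1] from PPFD readings [umol m^-2 s^-1] taken every interval_s seconds."""
    return sum(p * interval_s for p in ppfd_samples) / 1e6

# Synthetic hourly PPFD readings for one day at a partly shaded spot (zeros at night).
ppfd_hourly = [0] * 7 + [150, 400, 700, 900, 1000, 1000, 900, 700, 400, 150] + [0] * 7
print(f"DLI = {daily_light_integral(ppfd_hourly, interval_s=3600):.1f} mol m^-2 d^-1")
```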

293 A Fuzzy MCDM Approach for Health-Care Waste Management

Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi

Abstract:

The management of health-care wastes is one of the most important problems in Istanbul, a city with more than 12 million inhabitants, as it is in most developing countries. Negligence in the appropriate treatment and final disposal of health-care wastes can lead to adverse impacts on public health and on the environment. This paper employs a fuzzy multi-criteria group decision making approach, based on the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the technique for order preference by similarity to ideal solution (TOPSIS), to evaluate health-care waste (HCW) treatment alternatives for Istanbul. The evaluation criteria are determined employing the nominal group technique (NGT), a method of systematically developing a consensus of group opinion. The employed method is suited to managing multi-granularity linguistic information in a decision making problem with multiple information sources. The decision making framework employs the ordered weighted averaging (OWA) operator, which encompasses several operators, as the aggregation operator, since it can implement different aggregation rules by changing the order weights. The aggregation process is based on the unification of information by means of fuzzy sets on a basic linguistic term set (BLTS). Then, the unified information is transformed into linguistic 2-tuples in a way that rectifies the information-loss problem of other fuzzy linguistic approaches.

Keywords: Group decision making, health care waste management, multi-criteria decision making, OWA, TOPSIS, 2-tuple linguistic representation
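TOPSIS, one building block of the approach above, ranks alternatives by their relative closeness to an ideal and an anti-ideal solution. A minimal crisp (non-fuzzy) sketch of that ranking step (a toy decision matrix with benefit-type criteria and arbitrary weights, not the study's HCW data):

```python
import numpy as np

# Rows: HCW treatment alternatives; columns: benefit-type criteria scores.
X = np.array([[7.0, 5.0, 8.0],
              [6.0, 7.0, 6.0],
              [8.0, 4.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])                   # criteria weights (sum to 1)

R = X / np.linalg.norm(X, axis=0)               # vector-normalized decision matrix
V = R * w                                       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)      # ideal / anti-ideal points (benefit criteria)

d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)        # relative closeness to the ideal

print("ranking (best first):", np.argsort(-closeness))
```

The fuzzy group variant described in the abstract replaces the crisp scores with aggregated 2-tuple linguistic assessments before this distance computation.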

292 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method

Authors: Felix Platzer, Eric Fimbinger

Abstract:

In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason is that in such cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. When simulating local failure, it is therefore advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads, due to the failure of the bonds in those areas, with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to depict more realistic fracture behavior were evaluated, based on the example of filter cake. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles in industrial-sized simulations, is presented in this paper.

Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
