Search results for: vector density
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4532


2762 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, we solve American options under jump diffusion models using efficient time-dependent numerical methods. Several techniques are integrated to reduce the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction decomposition technique is applied to rational approximation schemes to avoid inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method.
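The FFT-based matrix-vector product mentioned above can be sketched as follows (a minimal illustration, not the authors' code): the dense Toeplitz matrix arising from the discretized jump integral is embedded in a circulant matrix, whose action on a vector becomes a pointwise product in Fourier space, reducing the cost from O(M²) to O(M log M).

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix by a vector in O(M log M) via FFT.

    c: first column of the Toeplitz matrix (length M)
    r: first row of the Toeplitz matrix (length M, with r[0] == c[0])
    x: vector of length M
    """
    M = len(x)
    # Embed the Toeplitz matrix in a 2M x 2M circulant matrix whose
    # first column is [c, 0, r[M-1], ..., r[1]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    # A circulant matvec is a circular convolution, i.e. a pointwise
    # product in Fourier space; zero-pad x to length 2M.
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(M)])))
    return y[:M].real
```

The same idea applies at every time step of the pricing scheme, where the kernel column stays fixed and only the vector changes.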

Keywords: integral differential equations, jump–diffusion model, American options, rational approximation

Procedia PDF Downloads 120
2761 Effect of Pioglitazone on Intracellular Na+ Homeostasis in Metabolic Syndrome-Induced Cardiomyopathy in Male Rats

Authors: Ayca Bilginoglu, Belma Turan

Abstract:

Metabolic syndrome is associated with impaired blood glucose levels, insulin resistance, and dyslipidemia caused by abdominal obesity. It is also related to cardiovascular risk accumulation and cardiomyopathy. The hypothesis of this study was to examine the effect of thiazolidinediones such as pioglitazone, a widely used insulin-sensitizing agent that improves glycemic control, on intracellular Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in male rats. Male Wistar-Albino rats were randomly divided into three groups: control (Con, n=7), metabolic syndrome (MetS, n=7), and pioglitazone-treated metabolic syndrome (MetS+PGZ, n=7). Metabolic syndrome was induced by providing drinking water containing 32% sucrose for 18 weeks. All animals were exposed to a 12 h light – 12 h dark cycle. Abdominal obesity and glucose intolerance were measured as markers of metabolic syndrome. Intracellular Na+ ([Na+]i) is an important modulator of excitation–contraction coupling in the heart. [Na+]i at rest and during pacing with electrical field stimulation at 0.2 Hz, 0.8 Hz, and 2.0 Hz stimulation frequencies was recorded in cardiomyocytes. Na+ channel current (INa) density and the I-V curve were also measured to characterize [Na+]i homeostasis. In the results, high sucrose intake, alongside the normal daily diet, significantly increased the body mass and blood glucose level of the rats in the metabolic syndrome group compared with the untreated control group. In the MetS+PGZ group, the blood glucose level and body mass tended to decrease toward control values. There was a decrease in INa density, and there was a shift in both the activation and inactivation curves of INa; pioglitazone reversed the shift toward the control values. Basal [Na+]i in the MetS and Con groups was not significantly different, but [Na+]i was significantly increased in stimulated cardiomyocytes in the MetS group.
Furthermore, pioglitazone had no effect on basal [Na+]i, but it reversed the increase in [Na+]i in stimulated cardiomyocytes to that of the Con group. The results of the present study suggest that pioglitazone has a significant effect on Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in rats. All animal procedures and experiments were approved by the Animal Ethics Committee of Ankara University Faculty of Medicine (2015-2-37).

Keywords: insulin resistance, intracellular sodium, metabolic syndrome, sodium current

Procedia PDF Downloads 285
2760 Characteristics of Sorghum (Sorghum bicolor L. Moench) Flour on the Soaking Time of Peeled Grains and Particle Size Treatment

Authors: Sri Satya Antarlina, Elok Zubaidah, Teti Istiana, Harijono

Abstract:

Sorghum (Sorghum bicolor L. Moench) has potential as a flour for gluten-free food products. Sorghum flour production requires a grain soaking treatment: soaking can reduce the tannin content, an anti-nutrient, and thereby increase protein digestibility. A fine particle size decreases the flour yield, so various particle sizes need to be studied to increase the yield. This study aims to determine the characteristics of sorghum flour under different peeled-grain soaking times and particle sizes. The material was the white sorghum variety KD-4 from farmers in East Java, Indonesia. A factorial randomized design (two factors) with three replications was used: factor I was the grain soaking time at five levels (0, 12, 24, 36, and 48 hours), and factor II was the particle size of the flour sieved at fineness levels of 40, 60, 80, and 100 mesh. Sorghum flour was made by peeling the grain, soaking the peeled grain, oven-drying at 60°C, milling, and sieving, followed by physicochemical analysis of the flour. The results show an interaction between grain soaking time and particle size of the sorghum flour, affecting flour yield, L* color (brightness level), whiteness index, paste properties, amylose content, protein content, bulk density, and protein digestibility. The method of making sorghum flour through the soaking of peeled grain and differences in particle size plays an important role in the resulting physicochemical properties of the flour. Based on the characteristics of the flour produced, the recommended method is soaking the sorghum grain for 24 hours with a flour particle size of 80 mesh.
The resulting sorghum flour had a flour yield of 24.88%, color L* (brightness level) of 88.60, whiteness index of 69.95, viscosity of 3615 cP, bulk density of 584.10 g/l, protein digestibility of 24.27% db, starch content of 90.02% db, amylose content of 23.4% db, amylopectin content of 67.45% db, crude fiber content of 0.22% db, tannin content of 0.037% db, protein content of 5.30% db, ash content of 0.18% db, carbohydrate content of 92.88% db, and fat content of 1.94% db. The sorghum flour is recommended for cookie products.

Keywords: characteristic, sorghum (Sorghum bicolor L. Moench) flour, grain soaking, particle size, physicochemical properties

Procedia PDF Downloads 162
2759 Discrete Group Search Optimizer for the Travelling Salesman Problem

Authors: Raed Alnajjar, Mohd Zakree, Ahmad Nazri

Abstract:

In this study, we apply a Discrete Group Search Optimizer (DGSO) to solve the Traveling Salesman Problem (TSP). The DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap, and insert. The TSP is a well-known hard combinatorial optimization problem that seeks the shortest tour through a number of cities. The performance of the proposed DGSO is evaluated and tested on benchmark instances from the TSPLIB dataset. The experimental results show that the performance of the proposed DGSO is comparable with state-of-the-art methods on some instances. DGSO outperforms Ant Colony System (ACS) on some instances and outperforms other metaheuristics on most instances. In addition, the new results include a number of optimal solutions and some best known results. DGSO was able to obtain feasible, good-quality solutions across the whole dataset.
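The swap and insert operators named above can be sketched as follows. This is an illustrative Python sketch of such discrete moves plus a simple improvement loop, not the authors' DGSO implementation; the acceptance rule here is plain greedy hill-climbing, an assumption for brevity.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_move(tour, i, j):
    """Exchange the cities at positions i and j."""
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def insert_move(tour, i, j):
    """Remove the city at position i and reinsert it at position j."""
    t = tour[:]
    city = t.pop(i)
    t.insert(j, city)
    return t

def local_search(tour, dist, moves=(swap_move, insert_move), iters=2000, seed=0):
    """Greedy improvement: apply random discrete moves, keep improvements."""
    rng = random.Random(seed)
    best, best_len = tour[:], tour_length(tour, dist)
    for _ in range(iters):
        move = rng.choice(moves)
        i, j = rng.sample(range(len(best)), 2)
        cand = move(best, i, j)
        cand_len = tour_length(cand, dist)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best, best_len
```

In the full DGSO, moves like these are applied within the producer/scrounger/ranger roles of group search rather than in a single greedy loop.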

Keywords: discrete group search optimizer (DGSO), travelling salesman problem (TSP), variable neighborhood search (VNS)

Procedia PDF Downloads 324
2758 A Comprehensive Planning Model for Amalgamation of Intensification and Green Infrastructure

Authors: Sara Saboonian, Pierre Filion

Abstract:

The dispersed-suburban model has been the dominant one across North America for the past seventy years, characterized by automobile reliance, low density, and land-use specialization. Two planning models have emerged as possible alternatives to address the ills inflicted by this development pattern. First, there is intensification, which promotes efficient infrastructure by connecting high-density, multi-functional, and walkable nodes with public transit services within the suburban landscape. Second is green infrastructure, which supports environmental health and human well-being by preserving and restoring ecosystem services. This research studies the incompatibilities between, and the possibility of amalgamating, the two alternatives in an attempt to develop a comprehensive alternative to the suburban model: one that advocates density, multi-functionality, and transit- and pedestrian-conduciveness, with measures capable of mitigating the adverse environmental impacts of compactness. The research investigates three Canadian urban growth centres, where intensification is the current planning practice and awareness of green infrastructure benefits is on the rise. The three centres are contrasted by their development stage, the presence or absence of protected natural land, their environmental approach, and their adverse environmental consequences according to the planning canons of different periods. The methods include reviewing the literature on green infrastructure planning, critiquing the Ontario provincial plans for intensification, surveying residents' preferences for alternative models, and interviewing officials who deal with local planning for the centres. The research also draws on the debates between New Urbanism and Landscape/Ecological Urbanism. The case studies expose the difficulties in creating urban growth centres that accommodate green infrastructure while adhering to intensification principles.
First, the dominant status of intensification and the obstacles confronting it have monopolized planners' concerns. Second, the tension between green infrastructure and intensification explains the absence of green infrastructure typologies that correspond to intensification-compatible forms and dynamics. Finally, the lack of highlighted socio-economic benefits of green infrastructure reduces residents' participation. The results also provide insight into the predominant urbanization theories, New Urbanism and Landscape/Ecological Urbanism. Understanding the political, planning, and ecological dynamics of such a blend requires dexterous, context-specific planning. The findings suggest that the following factors influence the amalgamation of intensification and green infrastructure. First, producing ecosystem-services-based justifications for green infrastructure development in the intensification context provides an expert-driven backbone for implementation programs; this knowledge base should be communicated effectively to the different urban stakeholders. Moreover, given the limited greenfields in intensified areas, green infrastructure measures must be spatially distributed along multi-level corridors such as pedestrian-hospitable settings and transportation networks. Finally, to ensure the long-term integrity of implemented green infrastructure measures, significant investment in public engagement and education, as well as clarification of management responsibilities, is essential.

Keywords: ecosystem services, green infrastructure, intensification, planning

Procedia PDF Downloads 355
2757 Reducing the Incidence Rate of Pressure Sore in a Medical Center in Taiwan

Authors: Chang Yu Chuan

Abstract:

Background and Aim: A pressure sore is not only the consequence of gradual damage to the skin leading to tissue defects but also an important indicator of clinical care. If hospitalized patients develop pressure sores and do not receive proper care, the result is delayed healing, wound infection, increased physical pain, prolonged hospital stays, and even death, with a negative impact on the quality of care as well as increased nursing manpower and medical costs. This project aimed to decrease the incidence of pressure sores in one internal medicine ward. Our data showed 53 cases (0.61%) of pressure sores in 2015, which exceeded the average (0.5%) of the Taiwan Clinical Performance Indicator (TCPI) for medical centers. After data collection and analysis from January to December 2016, the following reasons for developing pressure sores were found: 1. lack of knowledge about pressure sore prevention among nursing staff; 2. no courses on preventing pressure ulcers and pressure wound care held in the unit; 3. a low completion rate of the pressure sore care education that family members should receive from nursing staff; 4. insufficient decompression equipment; 5. lack of standard procedures for body-turning and positioning care. After team brainstorming, several strategies were proposed, including holding in-service education, pressure sore care seed training, purchasing decompression mattresses and memory pillows, designing more health education tools (such as pamphlets, posters, and multimedia films demonstrating body-turning and positioning), and formulating and promoting standard operating procedures. In this way, nursing staff can understand the body-turning and positioning guidelines for pressure sore prevention and enhance the quality of care.
After the implementation of this project, the pressure sore incidence in this ward decreased significantly from 0.61% (53 cases) to 0.45% (28 cases). The project shows good results, sets a good example for nurses working in the ward, and helps to enhance the quality of care.

Keywords: body-turning and positioning, incidence density, nursing, pressure sore

Procedia PDF Downloads 267
2756 Anti-lipidemic and Hematinic Potentials of Moringa Oleifera Leaves: A Clinical Trial on Type 2 Diabetic Subjects in a Rural Nigerian Community

Authors: Ifeoma C. Afiaenyi, Elizabeth K. Ngwu, Rufina N. B. Ayogu

Abstract:

Diabetes has crept into the rural areas of Nigeria, causing devastating effects on its sufferers, most of whom cannot afford diabetic medications. Moringa oleifera has been used extensively in animal models to demonstrate its anti-lipidemic and hematinic qualities; however, there is a scarcity of data on the effect of graded levels of Moringa oleifera leaves on the lipid profile and hematological parameters of human diabetic subjects. This study determined the effect of Moringa oleifera leaves on the lipid profile and hematological parameters of type 2 diabetic subjects in Ukehe, a rural Nigerian community. Twenty-four adult male and female diabetic subjects were purposively selected and divided into four groups of six subjects each. The diets used in the study were isocaloric. The control group (group 1) was fed diets without Moringa oleifera leaves, while experimental groups 2, 3, and 4 received 20 g, 40 g, and 60 g of Moringa oleifera leaves daily, respectively, in addition to the diets. The subjects' lipid profile and hematological parameters were measured before and at the end of the feeding trial, which lasted fourteen days. The data obtained were analyzed using the computer program Statistical Product and Service Solutions (SPSS) for Windows, version 21. A paired-samples t-test was used to compare the means of values collected before and after the feeding trial within the groups, and significance was accepted at p < 0.05. There was a non-significant (p > 0.05) decrease in the mean total cholesterol of the subjects in groups 1, 2, and 3 after the feeding trial, and a non-significant (p > 0.05) decrease in the mean triglyceride levels of the subjects in group 1. Groups 1 and 3 had a non-significant (p > 0.05) decrease in their mean low-density lipoprotein (LDL) cholesterol after the feeding trial.
Groups 1, 2, and 4 had a significant (p < 0.05) increase in their mean high-density lipoprotein (HDL) cholesterol after the feeding trial. A significant (p < 0.05) decrease in the mean hemoglobin level was observed only in group 4 subjects. Similarly, there was a significant (p < 0.05) decrease in the mean packed cell volume of group 4 subjects, and only in group 4 was a significant (p < 0.05) decrease in the mean white blood cell count observed. The changes observed in the assessed parameters were not dose-dependent; therefore, a similar study with a longer duration and more samples is needed to confirm these results.

Keywords: anemia, diabetic subjects, lipid profile, moringa oleifera

Procedia PDF Downloads 200
2755 Bag of Words Representation Based on Fusing Two Color Local Descriptors and Building Multiple Dictionaries

Authors: Fatma Abdedayem

Abstract:

We propose an extension to the well-known Bag of Words (BOW) method, which has played a successful role in the field of image categorization. In practice, this method is based on representing an image with visual words. In this work, we first extract features from images using Spatial Pyramid Representation (SPR) and two dissimilar color descriptors, opponent-SIFT and transformed-color-SIFT. Second, we fuse the color local features by joining the two histograms produced by these descriptors. Third, after collecting all features, we generate multiple dictionaries from n random feature subsets obtained by dividing all features into n random groups. Using these dictionaries separately, each image can be represented by n histograms, which are then concatenated horizontally to form the final histogram; this is how Multiple Dictionaries are combined (MDBoW). In the final step, to classify images, we apply a Support Vector Machine (SVM) to the generated histograms. We have experimentally tested our proposal on two dissimilar image datasets: Caltech 256 and PASCAL VOC 2007.
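The multi-dictionary step can be sketched as follows. This is a simplified, hypothetical illustration: real visual words would come from k-means clustering of SIFT descriptors, but here randomly drawn descriptors stand in for centroids; each sub-histogram is L1-normalized and the n histograms are concatenated.

```python
import numpy as np

def build_dictionaries(features, n_dicts, words_per_dict, seed=0):
    """Split descriptors into n_dicts random groups and pick visual words
    from each group (random descriptors stand in for k-means centroids)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    groups = np.array_split(idx, n_dicts)
    dicts = []
    for g in groups:
        words = features[rng.choice(g, size=words_per_dict, replace=False)]
        dicts.append(words)
    return dicts

def mdbow_histogram(image_features, dicts):
    """Quantize descriptors against each dictionary and concatenate
    the normalized per-dictionary histograms (the MDBoW representation)."""
    hists = []
    for words in dicts:
        # nearest visual word for each descriptor
        d2 = ((image_features[:, None, :] - words[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        h = np.bincount(assign, minlength=len(words)).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.concatenate(hists)
```

The concatenated histogram is what would then be fed to the SVM classifier.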

Keywords: bag of words (BOW), color descriptors, multi-dictionaries, MDBoW

Procedia PDF Downloads 297
2754 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) that develops over an aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented from the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers for which they were developed, and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
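A forward stepwise scheme of the kind described can be sketched in a few lines (shown here in Python rather than R, and using AIC as the selection criterion, which is an assumption; the abstract does not state the exact criterion): at each step, the predictor that most improves the criterion is added, and selection stops when no addition helps.

```python
import numpy as np

def aic(y, yhat, k):
    """Akaike information criterion for a Gaussian linear model with k parameters."""
    n = len(y)
    rss = ((y - yhat) ** 2).sum()
    return n * np.log(rss / n) + 2 * k

def forward_stepwise(X, y):
    """Greedy forward selection: repeatedly add the predictor that most lowers AIC."""
    selected, remaining = [], list(range(X.shape[1]))
    best_aic = aic(y, np.full_like(y, y.mean()), 1)  # intercept-only baseline
    improved = True
    while improved and remaining:
        improved = False
        scores = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            scores.append((aic(y, A @ beta, len(cols) + 1), j))
        score, j = min(scores)
        if score < best_aic:
            best_aic = score
            selected.append(j)
            remaining.remove(j)
            improved = True
    return selected
```

Uncorrelated candidate inputs fail to lower the AIC and are left out, which is the feature-selection behavior the abstract highlights.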

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 135
2753 MindFlow: A Collective Intelligence-Based System for Helping Stress Pattern Diagnosis

Authors: Andres Frederic

Abstract:

We present the MindFlow system, which supports the detection and diagnosis of stress. The heart of the system is a knowledge synthesis engine that allows occupational health stakeholders (psychologists, occupational therapists, and human resource managers) to formulate queries related to stress, responding to user requests by recommending a stress pattern if one exists. The stress pattern diagnosis is based on expert knowledge stored in the MindFlow stress ontology, including a stress feature vector. Query processing may involve direct access to the MindFlow system by occupational health stakeholders, online communication between the MindFlow system and the MindFlow domain experts, or direct dialog between an occupational health stakeholder and a MindFlow domain expert. The MindFlow knowledge model is generic in the sense that it supports the needs of psychologists, occupational therapists, and human resource managers. The system presented in this paper is currently under development as part of a Dutch-Japanese project and aims to assist organisations in the quick diagnosis of stress patterns.
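As a toy illustration of matching a stress feature vector against stored patterns, the sketch below uses cosine similarity over a handful of invented features; the feature names, pattern values, and threshold are all hypothetical and are not taken from the MindFlow ontology.

```python
import numpy as np

# Hypothetical stress patterns over invented features; illustration only.
FEATURES = ["heart_rate_var", "sleep_quality", "workload", "irritability"]
PATTERNS = {
    "burnout":        np.array([0.8, 0.2, 0.9, 0.7]),
    "acute_stress":   np.array([0.9, 0.6, 0.5, 0.9]),
    "chronic_strain": np.array([0.5, 0.3, 0.8, 0.4]),
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_pattern(query, threshold=0.9):
    """Return the best-matching stress pattern, or None if no pattern is close enough."""
    best_name, best_sim = None, -1.0
    for name, vec in PATTERNS.items():
        sim = cosine(query, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)
```

Returning None when no pattern clears the threshold mirrors the abstract's "recommending a pattern of stress if one exists".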

Keywords: occupational stress, stress management, physiological measurement, accident prevention

Procedia PDF Downloads 430
2752 Adaptive Kalman Filter for Fault Diagnosis of Linear Parameter-Varying Systems

Authors: Rajamani Doraiswami, Lahouari Cheded

Abstract:

Fault diagnosis of Linear Parameter-Varying (LPV) systems using an adaptive Kalman filter is proposed. The LPV model comprises scheduling parameters and emulator parameters. The scheduling parameters are chosen such that they are capable of tracking variations in the system model resulting from changes in the operating regimes. The emulator parameters, on the other hand, simulate variations in the subsystems during the identification phase and have a negligible effect during the operational phase. The nominal model and the influence vectors, which are the gradients of the feature vector with respect to the emulator parameters, are identified off-line from a number of emulator-parameter-perturbed experiments. A Kalman filter is designed using the identified nominal model. As the system varies, the Kalman filter model is adapted using the scheduling variables. The residual is employed for fault diagnosis. The proposed scheme is successfully evaluated on a simulated system as well as on a physical process control system.
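The residual-based idea can be illustrated with a scalar Kalman filter (a minimal sketch under assumed dynamics, not the identified LPV model of the paper): the filter's normalized innovations stay small under nominal operation, and a persistent offset in them signals a fault.

```python
import numpy as np

def kalman_residuals(y, a, c, q, r, x0=0.0, p0=1.0):
    """Run a scalar Kalman filter for x_{k+1} = a x_k + w, y_k = c x_k + v
    and return the normalized innovation (residual) sequence."""
    x, p = x0, p0
    residuals = []
    for yk in y:
        # predict
        x, p = a * x, a * p * a + q
        # innovation and its variance
        e = yk - c * x
        s = c * p * c + r
        residuals.append(e / np.sqrt(s))  # normalized residual
        # update
        k = p * c / s
        x, p = x + k * e, (1 - k * c) * p
    return np.array(residuals)

def detect_fault(residuals, window=20, thresh=2.0):
    """Flag a fault when the windowed mean of normalized residuals exceeds a threshold."""
    m = np.convolve(residuals, np.ones(window) / window, mode="valid")
    return np.abs(m).max() > thresh
```

In the adaptive scheme described above, the filter gains would additionally be re-scheduled as the operating regime changes, so the residual stays small unless a genuine fault occurs.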

Keywords: identification, linear parameter-varying systems, least-squares estimation, fault diagnosis, Kalman filter, emulators

Procedia PDF Downloads 499
2751 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System

Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes

Abstract:

The burgeoning global ownership of personal vehicles has placed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS; radar sensors also play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study are to simulate radar sensors to generate a substantial dataset for algorithm development and testing and, critically, to assess the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurements of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots.
This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
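A minimal sketch of DBSCAN-style occupancy detection on radar detections follows (illustrative only: the authors' models and the monoDrive data are not reproduced, the clustering below is a deliberately small reimplementation of the DBSCAN idea, and the eps/min_samples values and spot geometry are assumptions): detections are clustered, and a spot is declared occupied when a cluster centroid falls inside its bounds.

```python
import numpy as np

def dbscan(points, eps, min_samples):
    """Minimal DBSCAN: returns a cluster label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_samples:
            continue
        # grow a new cluster from core point i
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            if len(neighbors[j]) >= min_samples:  # j is a core point
                for k in neighbors[j]:
                    if labels[k] == -1:
                        labels[k] = cluster
                        stack.append(k)
        cluster += 1
    return labels

def spot_occupied(detections, spot_min, spot_max, eps=0.5, min_samples=3):
    """A spot is occupied if a DBSCAN cluster centroid falls inside its bounds."""
    labels = dbscan(detections, eps, min_samples)
    for c in range(labels.max() + 1):
        centroid = detections[labels == c].mean(axis=0)
        if np.all(centroid >= spot_min) and np.all(centroid <= spot_max):
            return True
    return False
```

Isolated clutter points fail the min_samples test and are discarded as noise, which is what makes density-based clustering attractive for radar point clouds.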

Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models

Procedia PDF Downloads 81
2750 Development of Fake News Model Using Machine Learning through Natural Language Processing

Authors: Sajjad Ahmed, Knut Hinkelmann, Flavio Corradini

Abstract:

Fake news detection research is still at an early stage, as this is a relatively new phenomenon of societal interest. Machine learning helps to solve complex problems and to build AI systems, especially in cases involving tacit or unknown knowledge. For the identification of fake news, we applied three machine learning classifiers: Passive Aggressive, Naïve Bayes, and Support Vector Machine. Simple classification alone is not sufficient for fake news detection because general-purpose classification methods are not specialized for fake news. With the integration of machine learning and text-based processing, we can detect fake news and build classifiers that can classify news data. Text classification mainly focuses on extracting various features of text and then incorporating those features into classification. The big challenge in this area is the lack of an efficient way to differentiate between fake and non-fake news due to the unavailability of corpora. We applied the three machine learning classifiers to two publicly available datasets. Experimental analysis based on the existing datasets indicates very encouraging and improved performance.
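One of the three classifiers can be sketched end-to-end with scikit-learn (a minimal illustration on an invented four-document corpus, not the paper's datasets, features, or tuning): TF-IDF features feed a Passive Aggressive classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the publicly available datasets; labels invented.
texts = [
    "scientists confirm vaccine passes final clinical trial",
    "government releases official inflation statistics today",
    "aliens endorse presidential candidate in secret meeting",
    "miracle fruit cures all diseases doctors hate it",
]
labels = ["real", "real", "fake", "fake"]

# TF-IDF text features + online Passive Aggressive linear classifier.
model = make_pipeline(
    TfidfVectorizer(),
    PassiveAggressiveClassifier(max_iter=1000, random_state=0),
)
model.fit(texts, labels)
```

The Naïve Bayes and SVM variants differ only in the final pipeline stage, which makes the three classifiers easy to compare on the same features.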

Keywords: fake news detection, natural language processing, machine learning, classification techniques

Procedia PDF Downloads 167
2749 Shear Strength Characteristics of Sand Mixed with Particulate Rubber

Authors: Firas Daghistani, Hossam Abuel Naga

Abstract:

Waste tyres are a global problem with a negative effect on the environment; approximately one billion waste tyres are discarded worldwide yearly. Waste tyres are discarded in stockpiles, where they harm the environment in many ways. Finding applications for these materials can help reduce this global problem. One such application is recycling these waste materials and using them in geotechnical engineering. Recycled waste tyre particulates can be mixed with sand to form a lightweight material with varying shear strength characteristics. Contradictory results were found in the literature on the inclusion of particulate rubber in sand: some experiments found that particulate rubber can increase the shear strength of the mixture, while others reported that it decreases the shear strength. This research further investigates the inclusion of particulate rubber in sand and whether it increases or decreases the shear strength characteristics of the mixture. In the experiment, a series of direct shear tests were performed on a poorly graded sand with a mean particle size of 0.32 mm mixed with recycled, poorly graded particulate rubber with a mean particle size of 0.51 mm. The shear tests were performed at four normal stresses (30, 55, 105, and 200 kPa) at a shear rate of 1 mm/minute. Different percentages of particulate rubber were used in the mixture, i.e., 10%, 20%, 30%, and 50% of the sand's dry weight, at three density states, namely loose, slightly dense, and dense. The size ratio of the mixture, which is the mean particle size of the particulate rubber divided by the mean particle size of the sand, was 1.59. The results identified multiple parameters that can influence the shear strength of the mixture: normal stress, particulate rubber content, mixture gradation, mixture size ratio, and the mixture's density.
The inclusion of particulate rubber in sand showed a decrease in the internal friction angle and an increase in the apparent cohesion. Overall, the inclusion of particulate rubber did not have a significant influence on the shear strength of the mixture. For all the dense states at the low normal stresses of 30 and 55 kPa, the inclusion of particulate rubber showed a slight increase in the shear strength, with the peak at 20% rubber content of the sand's dry weight. On the other hand, at the high normal stresses of 105 and 200 kPa, there was a slight decrease in the shear strength.
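The friction angle and apparent cohesion discussed above come from fitting the Mohr-Coulomb envelope to the direct shear results; a minimal sketch of that fit (with illustrative values, not the paper's data):

```python
import numpy as np

def mohr_coulomb_fit(normal_stress, shear_strength):
    """Least-squares fit of tau = c + sigma * tan(phi) to direct shear data.

    Returns (apparent cohesion c, friction angle phi in degrees), both in
    the units of the input stresses.
    """
    slope, intercept = np.polyfit(normal_stress, shear_strength, 1)
    return intercept, np.degrees(np.arctan(slope))
```

A rubber-induced drop in the fitted slope (friction angle) together with a rise in the intercept (apparent cohesion) is exactly the trend reported in the abstract.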

Keywords: shear strength, direct shear, sand-rubber mixture, waste material, granular material

Procedia PDF Downloads 132
2748 Metallic-Diamond Tools with Increased Abrasive Wear Resistance for Grinding Industrial Floor Systems

Authors: Elżbieta Cygan-Bączek, Piotr Wyżga

Abstract:

This paper presents the results of research on the physical, mechanical, and tribological properties of materials constituting the matrix in sintered metallic-diamond tools. Ground powders based on the Fe-Mn-Cu-Sn-C system were modified with micro-sized particles of a ceramic phase (SiC, Al₂O₃) and consolidated by spark plasma sintering (SPS) to a relative density of over 98% at 850-950°C, under a pressure of 35 MPa, for 10 min. After sintering, the microstructure was analysed using scanning electron microscopy. The resulting materials were tested for apparent density determined by Archimedes' method, Rockwell hardness (scale B), and Young's modulus, as well as for technological properties. The performance of the obtained diamond composites was compared with the base material (Fe-Mn-Cu-Sn-C) and the commercial alloy Co-20% WC. The hardness of the composites reached its maximum at a sintering temperature of 900°C; it should therefore be considered that the optimal physical and mechanical properties of the composites were obtained at this temperature. Research on tribological properties showed that the composites modified with micro-sized ceramic-phase particles exhibit more than twice the wear resistance of the base materials and the commercial alloy Co-20% WC. Composites containing Al₂O₃ phase particles in the matrix material were characterized by the lowest abrasive wear resistance. The manufacturing technology presented in the paper is economically justified and can be successfully used in the production of matrices for sintered diamond-impregnated tools used for the machining of industrial floor systems. Acknowledgment: The study was performed under LIDER IX Research Project No. 
LIDER/22/0085/L-9/17/NCBR/2018 entitled “Innovative metal-diamond tools without the addition of critical raw materials for applications in the process of grinding industrial floor systems” funded by the National Centre for Research and Development of Poland, Warsaw.
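The relative density figure quoted above (over 98%) follows from comparing the Archimedes apparent density with the theoretical density of the powder blend. A small sketch; the weighings and the theoretical density are assumptions for illustration, not values from the paper:

```python
def archimedes_density(mass_dry_g, mass_in_water_g, rho_water=0.9982):
    """Apparent density (g/cm3) from a dry weighing and a weighing of the
    sample suspended in water (Archimedes' method)."""
    return mass_dry_g * rho_water / (mass_dry_g - mass_in_water_g)

# Hypothetical weighings and an assumed theoretical density for the blend
rho = archimedes_density(12.50, 10.95)   # apparent density, g/cm3
relative = 100.0 * rho / 8.20            # relative density, % (8.20 g/cm3 assumed)
```

With these assumed numbers the relative density comes out just above the 98% threshold reported for the sintered compacts.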

Keywords: abrasive wear resistance, metal matrix composites, sintered diamond tools, Spark Plasma Sintering

Procedia PDF Downloads 78
2747 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of the growing season. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location, without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting inside the rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera with a resolution of 27 Mpixels installed on the robot at a height of 2 m and by a drone with a 16-Mpixel camera flying at a height of 4 m. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that, in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. 
With the help of the developed method, the performance can be assessed several times during the growing season, according to the robotic weeding frequency. When the method is used by farmers, they can know the field condition and the efficiency of the robotic treatment over the whole field. Farmers and researchers could develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
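The per-area and in-row weed densities reported above can be computed directly from detector output. A minimal sketch assuming YOLO-style detections already classified as crop/weed and flagged as inside or outside a row; the sample counts and frame geometry are hypothetical:

```python
def weed_densities(detections, frame_area_m2, row_length_m):
    """Turn detections into the two densities used in the study:
    weeds per m2 of field and in-row weeds per metre of row."""
    weeds = [d for d in detections if d["label"] == "weed"]
    in_row = [d for d in weeds if d["in_row"]]
    return {"weeds_per_m2": len(weeds) / frame_area_m2,
            "in_row_weeds_per_m": len(in_row) / row_length_m}

# Hypothetical detections from one frame covering 3 m2 and 3 m of row
sample = ([{"label": "weed", "in_row": False}] * 20
          + [{"label": "weed", "in_row": True}] * 6
          + [{"label": "crop", "in_row": True}] * 12)
stats = weed_densities(sample, frame_area_m2=3.0, row_length_m=3.0)
```

Aggregating such per-frame statistics over the field gives the density maps used to compare robotic, chemical, and untreated areas.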

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 87
2746 Electrical Tortuosity across Electrokinetically Remediated Soils

Authors: Waddah S. Abdullah, Khaled F. Al-Omari

Abstract:

Electrokinetic remediation is one of the most effective methods for decontaminating contaminated soils. Electroosmosis and electromigration are the processes of electrochemical extraction of contaminants from soils, and the driving force behind both is the voltage gradient. Therefore, the electric field distribution throughout the soil domain is extremely important to investigate, as is determining the factors that help establish a uniform electric field distribution, so that the clean-up process works properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these passive electrodes was measured in order to observe the electrical distribution throughout the tested soil specimens. The electrokinetic test was conducted on two types of soil: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was measured under different test conditions, and the electric field distribution was mapped in a three-dimensional pattern in order to establish the electrical distribution within the soil domain. The effects of density, applied voltage, and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of the moisture content and of the sodium and calcium ion concentrations were determined and presented in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil. 
Very importantly, it was found that even when the electric field distribution is uniform globally (i.e., between the passive electrodes), local non-uniformity can be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass disconnects the electric current, and hence no removal of contaminants occurs within these areas.
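The contrast between global uniformity and local non-uniformity can be quantified directly from the passive-electrode readings: the local gradient between each adjacent pair versus the overall gradient across the array. A sketch with hypothetical voltage readings:

```python
import numpy as np

def gradient_profile(voltages_v, spacing_cm):
    """Local voltage gradients (V/cm) between adjacent in-line passive
    electrodes (anode to cathode) and a max/min non-uniformity ratio."""
    grads = -np.diff(np.asarray(voltages_v, float)) / spacing_cm
    return grads, float(grads.max() / grads.min())

# Hypothetical readings at five passive electrodes spaced 5 cm apart
grads, ratio = gradient_profile([30.0, 24.0, 19.0, 9.0, 4.0], 5.0)
# Global gradient is (30 - 4) / 20 = 1.3 V/cm, yet local values span 1.0-2.0 V/cm
```

A ratio well above 1 flags a low-conductivity region (e.g., a crack or air gap) where contaminant removal may stall even though the end-to-end gradient looks uniform.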

Keywords: contaminant removal, electrical tortuosity, electromigration, electroosmosis, voltage distribution

Procedia PDF Downloads 421
2745 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film

Authors: Nalla Somaiah, Praveen Kumar

Abstract:

Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated by the continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. Herein, it is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature-gradient-induced mass transport was studied using a standard Blech structure. In this sample structure, the majority of the current is forcefully directed into the low-resistivity metallic film from a high-resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the Blech structure configuration. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150 and 200 μm, were fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250 °C. 
Herein, along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, the smaller-length samples consistently showed enhanced migration at the cathode end, indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model capturing the interplay between the electromigration and thermomigration driving forces has been developed to explain this observation.
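The Blech criterion referenced above can be evaluated directly: drift stops when the back-stress gradient balances the electromigration force, giving a critical length L_c = Δσ·Ω / (|Z*|·e·ρ·j). A sketch in which the current density matches the experiment but the Cu material constants are illustrative assumptions, not values from the paper:

```python
E = 1.602e-19   # elementary charge, C

def blech_critical_length(j, z_eff, rho, d_sigma, omega):
    """Critical (Blech) length L_c = d_sigma * omega / (|z_eff| * e * rho * j),
    in metres: below L_c, back-stress balances the electromigration force."""
    return d_sigma * omega / (abs(z_eff) * E * rho * j)

L_c = blech_critical_length(
    j=4e10,          # A/m^2, as used in the study
    z_eff=5.0,       # effective valence magnitude (assumed)
    rho=2.5e-8,      # ohm.m, Cu resistivity near 250 C (assumed)
    d_sigma=4e8,     # Pa, sustainable back-stress (assumed)
    omega=1.18e-29,  # m^3, Cu atomic volume
)
```

With these assumed constants L_c comes out at a few micrometres, i.e., below the shortest 10 μm strips; the observed length dependence therefore points to the temperature-gradient (thermomigration) contribution rather than the classical back-stress balance alone.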

Keywords: Blech structure, electromigration, temperature gradient, thin films

Procedia PDF Downloads 257
2744 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment

Authors: Loyd R. Hook, Maryam Moharek

Abstract:

With the envisioned future growth of low-altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach for this future vision, in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which can respond to highly dynamic conditions while still accounting for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low-altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflicts and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by each vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts. 
Performing the search for a set of commands that can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan may contain multiple segments. This means that relative velocities change when any aircraft reaches a waypoint and changes course. Additionally, the timing of when an aircraft reaches a waypoint (or, more directly, the order in which all of the aircraft reach their respective waypoints) changes with the commanded speed. Put together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
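The discretized depth-first search with early stopping described above can be sketched generically: assign one candidate speed per aircraft, prune any partial assignment that already violates separation, and return the first complete feasible assignment. The separation check here is a toy stand-in, not the paper's geometric test:

```python
def first_feasible(speeds_per_ac, conflict_free):
    """Depth-first search over discretized speed commands.

    Returns the first full assignment passing the separation check,
    pruning (early stopping) any partial assignment that already conflicts."""
    n = len(speeds_per_ac)

    def dfs(assigned):
        if len(assigned) == n:
            return list(assigned)
        for v in speeds_per_ac[len(assigned)]:
            cand = assigned + [v]
            if conflict_free(cand):   # prune infeasible partial assignments
                out = dfs(cand)
                if out is not None:
                    return out
        return None

    return dfs([])

# Toy constraint standing in for pairwise separation: no two aircraft
# may hold the same commanded speed (purely illustrative).
sol = first_feasible(
    [[50, 60, 70], [50, 60, 70]],
    conflict_free=lambda s: len(set(s)) == len(s),
)
```

Pruning at the partial-assignment level is what keeps the 10-vehicle case tractable: infeasible branches are abandoned before the remaining aircraft are even assigned.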

Keywords: strategic planning, autonomous, aircraft, deconfliction

Procedia PDF Downloads 95
2743 Temperature Dependence of Relative Permittivity: A Measurement Technique Using Split Ring Resonators

Authors: Sreedevi P. Chakyar, Jolly Andrews, V. P. Joseph

Abstract:

A compact method for measuring the relative permittivity of a dielectric material at different temperatures, using a single circular split ring resonator (SRR) metamaterial unit working as a test probe, is presented in this paper. The dielectric constant of a material depends on its temperature, and the LC resonance of the SRR depends on its dielectric environment. Hence, the temperature of the dielectric material in contact with the resonator influences its resonant frequency. A single SRR placed between transmitting and receiving probes connected to a vector network analyser (VNA) is used as the test probe. The dependence of the resonant frequency of the SRR on temperature between 30 °C and 60 °C is analysed. Relative permittivities ε of test samples at different temperatures are extracted from a calibration graph drawn between the relative permittivity of samples of known dielectric constant and their corresponding resonant frequencies. This method is found to be an easy and efficient technique for analysing the temperature-dependent permittivity of different materials.
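The calibration-graph step above amounts to fitting a line through (resonant frequency, known permittivity) pairs and reading an unknown sample off that line. A sketch with hypothetical calibration points (resonance falling as permittivity rises):

```python
import numpy as np

def calibrate(eps_known, f_res_ghz):
    """Least-squares calibration line through (resonant frequency,
    permittivity) pairs of reference samples."""
    return np.polyfit(f_res_ghz, eps_known, 1)   # (slope, intercept)

def permittivity(coeffs, f_ghz):
    """Read an unknown sample's permittivity off the calibration line."""
    return float(np.polyval(coeffs, f_ghz))

# Hypothetical reference samples measured on the SRR probe
coeffs = calibrate(eps_known=[1.0, 2.1, 3.8, 4.6],
                   f_res_ghz=[4.10, 3.95, 3.72, 3.61])
eps = permittivity(coeffs, 3.80)   # unknown sample resonating at 3.80 GHz
```

Repeating the measurement at each temperature step gives ε(T) for the sample under test.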

Keywords: metamaterials, negative permeability, permittivity measurement techniques, split ring resonators, temperature dependent dielectric constant

Procedia PDF Downloads 412
2742 Generalized Approach to Linear Data Transformation

Authors: Abhijith Asok

Abstract:

This paper presents a generalized approach to the simple linear data transformation, Y = bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional 'dummy dimension' to the n-dimensional data, which allows two-dimensional, component-wise straight lines to be plotted on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2ⁿ spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the 'dummy axis'. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on and perpendicular to the 'dummy axis', respectively. This result gives a geometrical interpretation of a linear data transformation and hence opens opportunities for a more informed choice of the factor b, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retains the same scaling factor as the features.
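One way to make the construction concrete (the parametrization below is an assumption consistent with the abstract, not the paper's exact formulation): lift each observation to dummy coordinate 1, draw rays from a common origin at dummy coordinate a, and intersect them with the plane shifted to dummy coordinate h; the induced factor is b = (h - a) / (1 - a), which indeed depends on both the origin and the plane of extension:

```python
import numpy as np

def scale_via_dummy(X, h, a=0.0):
    """Geometric view of Y = b*X: rays from (0, ..., 0, a) on the dummy
    axis through the lifted points (x, 1), intersected with the plane at
    dummy coordinate h. Induced factor: b = (h - a) / (1 - a)."""
    b = (h - a) / (1.0 - a)
    return b * np.asarray(X, float), b

# Shifting the plane to h = 3 with the origin at a = 0 gives b = 3
Y, b = scale_via_dummy([[1.0, 2.0], [3.0, -1.0]], h=3.0, a=0.0)
```

Euclidean distances between rows scale by the same |b|, matching the abstract's observation that many distance metrics retain the feature scaling factor.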

Keywords: data transformation, dummy dimension, linear transformation, scaling

Procedia PDF Downloads 298
2741 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms

Authors: Divya Agarwal, Pushpendra S. Bharti

Abstract:

Autonomous mobile robots (AMRs) are expected to serve as smart tools for operations in every automation industry. Path planning and obstacle avoidance are the backbone of AMRs, as robots have to reach their goal location while avoiding obstacles and traversing a path optimized according to criteria such as distance, time, or energy. Path planning can be classified into global and local path planning, where environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. Algorithms such as the artificial potential field (APF), rapidly exploring random trees (RRT), bidirectional RRT, fuzzy approaches, pure pursuit, the A* algorithm, the vector field histogram (VFH), and modified local path planning algorithms have been used over the last three decades for path planning and obstacle avoidance for AMRs. This paper attempts to review some of the path planning and obstacle avoidance algorithms used in the field of AMRs. The review includes a comparative analysis of simulations and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it can be concluded that different algorithms may complete the same task (i.e., with a different set of instructions) in less or more time, space, and effort.
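Of the algorithms surveyed, the artificial potential field is the simplest to illustrate: the robot descends a potential that attracts it to the goal and repels it from nearby obstacles. A minimal single-step sketch (gains and radii are illustrative defaults, not values from any reviewed paper):

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    """One normalized step of an artificial potential field (APF) planner:
    attraction toward the goal plus repulsion from obstacles within
    influence radius d0."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)
    for ob in obstacles:
        ob = np.asarray(ob, float)
        d = np.linalg.norm(pos - ob)
        if 0.0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (pos - ob) / d
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

# The obstacle lies outside its influence radius here, so the robot
# steps straight toward the goal.
p = apf_step([0.0, 0.0], [10.0, 0.0], obstacles=[[5.0, 0.5]])
```

Iterating this step traces a path; the well-known weakness, local minima where attraction and repulsion cancel, is what motivates many of the modified algorithms the review compares.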

Keywords: path planning, obstacle avoidance, autonomous mobile robots, algorithms

Procedia PDF Downloads 232
2740 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy

Authors: Kemal Polat

Abstract:

In this paper, three feature weighting methods are used to improve the classification performance for diabetic retinopathy (DR). To classify diabetic retinopathy, features extracted from the output of several retinal image processing algorithms, such as image-level, lesion-specific, and anatomical components, were used and fed into the classifier algorithms. The dataset used in this study was taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods, including fuzzy c-means clustering based feature weighting, subtractive clustering based feature weighting, and Gaussian mixture clustering based feature weighting, were used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms, comprising multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes, were used. The hybrid method based on the combination of subtractive clustering based feature weighting and the decision tree classifier achieved a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising for medical dataset classification.
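Clustering-center-based feature weighting generally derives a per-feature weight from the cluster centers and multiplies the data by it before classification. A minimal sketch of one common scheme, the ratio of each feature's grand mean to the mean of that feature over the cluster centers; the paper's exact formulation may differ, and the data and centers below are toy values:

```python
import numpy as np

def center_based_weights(X, centers):
    """Feature weights from clustering centers: grand mean of each feature
    divided by the mean of that feature over the cluster centers.
    Weighted data is then X * w."""
    return X.mean(axis=0) / np.asarray(centers, float).mean(axis=0)

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
centers = np.array([[1.0, 20.0], [3.0, 30.0]])   # e.g. from FCM or subtractive clustering
w = center_based_weights(X, centers)
Xw = X * w   # weighted features passed to MLP, k-NN, SVM, etc.
```

Features whose centers sit far from the grand mean receive weights away from 1, rescaling them before the classifier sees the data.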

Keywords: machine learning, data weighting, classification, data mining

Procedia PDF Downloads 326
2739 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and of the micro-climate in urban areas, obtained by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They fall into various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density with respect to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings, and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation. 
The explanatory model of the dependent variable is obtained by multiple linear regression on the remaining explanatory variables (Pearson correlation matrix with |r| < 0.7 and VIF < 5), using a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed, owing to its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20 °C. Surface temperature is the most important variable in the model for the estimation of air temperature. Other variables recur, such as distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the micro-scale, in order to account for the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
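The fit-on-80%, score-on-20% holdout scheme described above can be sketched with ordinary least squares. The data here are a synthetic stand-in (air temperature driven mostly by surface temperature plus a weaker NDVI effect, with assumed coefficients), not the Lyon measurements:

```python
import numpy as np

def holdout_mlr(X, y, train_frac=0.8, seed=0):
    """Fit ordinary least squares on a random 80% split; report test RMSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_tr = int(train_frac * len(y))
    tr, te = idx[:n_tr], idx[n_tr:]
    A = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(A[tr], y[tr], rcond=None)
    rmse = float(np.sqrt(np.mean((A[te] @ beta - y[te]) ** 2)))
    return beta, rmse

# Synthetic stand-in for the study data (coefficients assumed)
rng = np.random.default_rng(1)
t_surf = 20 + 5 * rng.random(200)                    # surface temperature, degC
ndvi = rng.random(200)
t_air = 3 + 0.8 * t_surf - 1.5 * ndvi + 0.1 * rng.standard_normal(200)
beta, rmse = holdout_mlr(np.column_stack([t_surf, ndvi]), t_air)
```

Comparing the test RMSE against the training residuals is what flags over-fitting; the stepwise variable selection and the |r|/VIF screening sit upstream of this step.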

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 137
2738 Improved Mutual Inductance of Rogowski Coil Using Hexagonal Core

Authors: S. Al-Sowayan

Abstract:

Rogowski coils are increasingly used for the measurement of AC and transient electric currents. Most Rogowski coils in use today have circular or rectangular cores. In order to increase the measurement sensitivity of the Rogowski coil and to allow smooth wire winding, this paper studies the effect of increasing the mutual inductance, and hence the coil sensitivity, by presenting the calculation and simulation of a Rogowski coil with an equilateral hexagonal core and comparing the resulting mutual inductance with that of commonly used core shapes.
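To first order, the mutual inductance of a toroidal Rogowski coil scales with the core cross-section area, so the shape comparison can be sketched with the thin-core formula M = μ0·N·A / (2π·R). The coil dimensions below are hypothetical, and this approximation ignores the flux-integral and winding-smoothness effects the paper actually analyses:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def rogowski_mutual_inductance(n_turns, core_area_m2, mean_radius_m):
    """Thin-core approximation M = mu0 * N * A / (2*pi*R) for a toroidal
    Rogowski coil of N turns, core cross-section A, mean radius R."""
    return MU0 * n_turns * core_area_m2 / (2 * np.pi * mean_radius_m)

def hexagon_area(side_m):
    """Area of a regular hexagon of given side length."""
    return 3 * np.sqrt(3) / 2 * side_m ** 2

# Hypothetical coil: 500 turns, 40 mm mean radius; hexagon side and
# circle radius both 5 mm
M_hex = rogowski_mutual_inductance(500, hexagon_area(5e-3), 40e-3)
M_circ = rogowski_mutual_inductance(500, np.pi * (5e-3) ** 2, 40e-3)
```

At equal characteristic dimension the circle encloses more area, so the hexagon's advantage claimed in the paper must come from the exact flux integral and the smoother winding, which this first-order sketch does not capture.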

Keywords: Rogowski coil, mutual inductance, magnetic flux density, communication engineering

Procedia PDF Downloads 370
2737 CeO₂-Decorated Graphene-coated Nickel Foam with NiCo Layered Double Hydroxide for Efficient Hydrogen Evolution Reaction

Authors: Renzhi Qi, Zhaoping Zhong

Abstract:

Under the dual pressure of the global energy crisis and environmental pollution, avoiding the consumption of non-renewable fossil fuels based on carbon as the energy carrier, and developing and utilizing non-carbon energy carriers, are basic requirements for the future new-energy economy. Electrocatalysts for water splitting play an important role in building sustainable and environmentally friendly energy conversion. The oxygen evolution reaction (OER) is essentially limited by the slow kinetics of multi-step proton-electron transfer, which limits the efficiency and cost of water splitting. In this work, CeO₂@NiCo-NRGO/NF hybrid materials were prepared using nickel foam (NF) and nitrogen-doped reduced graphene oxide (NRGO) as conductive substrates by a multi-step hydrothermal method and were used as highly efficient catalysts for the OER. The well-connected nanosheet array forms a three-dimensional (3D) network on the substrate, providing a large electrochemical surface area with abundant catalytically active sites. The doping of CeO₂ in the NiCo-NRGO/NF electrocatalyst promotes the dispersion of species and a synergistic effect promoting the activation of reactants, which is crucial for improving its catalytic performance for the OER. The results indicate that CeO₂@NiCo-NRGO/NF requires an overpotential of only 250 mV to drive a current density of 10 mA cm⁻² for the OER in 1 M KOH, and it exhibits excellent stability at this current density for more than 10 hours. The double-layer capacitance (Cdl) values show that CeO₂@NiCo-NRGO/NF significantly improves the interfacial conductivity and electrochemically active surface area. The hybrid structure promotes the catalytic performance of the oxygen evolution reaction, with a low onset potential, high electrical activity, and excellent long-term durability. This strategy for improving the catalytic activity of NiCo-LDH can be used to develop a variety of other electrocatalysts for water splitting.

Keywords: CeO₂, reduced graphene oxide, NiCo-layered double hydroxide, oxygen evolution reaction

Procedia PDF Downloads 82
2736 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand

Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott

Abstract:

Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and the changes to these stocks in case of land-use change. However, Thailand presently lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation, and for smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: an approximately 50-year-old secondary forest, a 4-year-old fallow, and a 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and from samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Density database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. 
This study provides insights for reforestation management and can be used to prepare baseline data on Thailand's carbon stocks for REDD+ and other carbon trading schemes. These may provide monetary incentives to stop illegal logging and deforestation for monoculture.
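Allometric models relating AGB to DBH, H, and WD are typically fit as power laws in log-log space. A sketch of that fitting step on synthetic harvest data generated from a known law; the coefficients, the height-DBH relation, and the data are assumptions for illustration, not the study's measurements:

```python
import numpy as np

def fit_allometric(dbh_cm, h_m, wd, agb_kg):
    """Fit ln(AGB) = ln(a) + b*ln(DBH^2 * H * WD) by least squares; return (a, b)."""
    x = np.log(np.asarray(dbh_cm) ** 2 * np.asarray(h_m) * np.asarray(wd))
    b, ln_a = np.polyfit(x, np.log(agb_kg), 1)
    return float(np.exp(ln_a)), float(b)

# Synthetic harvest data from a known law (a = 0.06, b = 0.98),
# spanning the study's DBH range of 1-31 cm
rng = np.random.default_rng(0)
dbh = rng.uniform(1.0, 31.0, 60)
h = 1.5 * dbh ** 0.7                   # assumed height-DBH relation
wd = rng.uniform(0.4, 0.7, 60)         # wood density, g/cm3
agb = 0.06 * (dbh ** 2 * h * wd) ** 0.98 * np.exp(0.05 * rng.standard_normal(60))
a, b = fit_allometric(dbh, h, wd, agb)
```

Dropping WD from the compound predictor and refitting shows the accuracy gain the abstract reports from including wood density.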

Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest

Procedia PDF Downloads 284
2735 Dielectric Properties of La₂MoO₆ Ceramics at Microwave Frequency

Authors: Yih-Chien Chen, Yu-Cheng You

Abstract:

The microwave dielectric properties of La₂MoO₆ ceramics were investigated with a view to their application in mobile communication. La₂MoO₆ ceramics were prepared by the conventional solid-state method under various sintering conditions. The X-ray diffraction peaks of the La₂MoO₆ ceramics did not vary significantly with the sintering conditions. The average grain size of the La₂MoO₆ ceramics increased as the sintering temperature and time increased. A maximum density of 5.67 g/cm³, a dielectric constant (εr) of 14.1, a quality factor (Q×f) of 68,000 GHz, and a temperature coefficient of resonant frequency (τf) of -56 ppm/°C were obtained when the La₂MoO₆ ceramics were sintered at 1300 °C for 4 h.

Keywords: ceramics, sintering, microwave dielectric properties, La₂MoO₆

Procedia PDF Downloads 291
2734 The Examination of Cement Effect on Isotropic Sands during Static, Dynamic, Melting and Freezing Cycles

Authors: Mehdi Shekarbeigi

Abstract:

The consolidation of loose subgrades and subgrade layers by adding stabilizing materials is one of the most commonly used road construction techniques. Cement, lime, and fly ash, as well as asphalt emulsion, are common materials used for soil stabilization to enhance the soil's strength and durability properties. Cement can readily be used to stabilize permeable materials such as sand within a relatively short time. In this research, ordinary Portland cement was selected for the stabilization of isotropic sand, and the effect of static and cyclic loading on the behavior of these soils was examined at various percentages of Portland cement. First, the soil's general properties were investigated, and then static tests, including direct shear, density, and single-axis compression tests, as well as the California Bearing Ratio, were performed on the samples. After that, the dynamic behavior of cemented silica sand with the same grain size was analyzed. These experiments were conducted on samples with cement contents of 3%, 6%, and 9% and effective confining pressures of 0 to 1200 kPa in 200 kPa steps, in accordance with ASTM D3999. Also, to test the effect of temperature, the samples were subjected to 0, 5, 10, and 20 melting and freezing cycles. Results of the static tests showed that increasing the cement percentage increases the soil density and shear strength. The increase in single-axis compressive strength is higher for samples with higher cement content and lower densities. The results also illustrate the relationship between single-axis compressive strength and the cement weight parameters. Results of the dynamic experiments indicate that increasing the number of loading cycles and of melting and freezing cycles enhances permeability and decreases the applied pressure. 
According to the results of this research, the samples containing 9% cement have the highest shear modulus and therefore the lowest soil permeability; this can be considered the optimal cement content. Also, increasing the effective confining pressure from 400 to 800 kPa increased the shear modulus of the samples by an average of 20 to 30 percent at small strains.

Keywords: cement, isotropic sands, static load, three-axis cycle, melting and freezing cycles

Procedia PDF Downloads 76
2733 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn, and it conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; the top three models in this study are Logistic Regression, Deep Neural Network, and AdaBoost. The k-means algorithm is applied to establish and analyze four different customer clusters. This study effectively identifies customers at risk of churn and may be used to develop and execute strategies that lower customer attrition.
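As a quick sanity check, the reported F1 score is internally consistent with the reported precision and recall, since F1 is their harmonic mean:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Internal-consistency check of the reported logistic-regression metrics
score = f1(0.6895, 0.5657)   # ~= 0.6215, matching the reported F1 score
```

The gap between 81.76% accuracy and the 0.62 F1 reflects class imbalance, which is also why segmentation of the vulnerable groups matters.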

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 57