Search results for: linear code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4599

1569 Causes and Impacts of Marine Heatwaves in the Bay of Bengal Region in the Recent Period

Authors: Sudhanshu Kumar, Raghvendra Chandrakar, Arun Chakraborty

Abstract:

In the ocean, temperature extremes have the potential to devastate marine habitats and ecosystems, with ensuing socioeconomic consequences. In recent years, these extreme events have become more frequent and intense globally, and this increasing trend is expected to continue in the coming decades. The topic has recently attracted both public and scientific interest, which motivated us to analyze recent marine heatwave (MHW) events in the Bay of Bengal region. We isolated 107 MHW events (above the 90th-percentile threshold) in this region of the Indian Ocean and investigated the variation in duration, intensity, and frequency of MHW events during the study period (1982-2021). Our study reveals an average of three MHW events per year in the study region, with an increasing linear trend of 1.11 MHW events per decade. The longest MHW event lasted about 99 days, far longer than the average event duration. The maximum intensity was 5.29°C above the climatological mean, while the mean intensity was 2.03°C. In addition, we observed net heat flux accompanied by anticyclonic eddies to be the primary cause of these events. Moreover, we concluded that these events affect sea surface height and oceanic productivity, highlighting the adverse impact of MHWs on marine ecosystems.
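
As an illustration of the event-isolation step described above, a minimal sketch assuming the common definition of an MHW as a run of at least five consecutive days with SST above the day-matched 90th-percentile climatology (the study's exact climatology windowing is not reproduced here):

```python
import numpy as np

def detect_mhw_events(sst, clim_mean, clim_p90, min_duration=5):
    """Isolate marine heatwave (MHW) events from a daily SST series.

    sst       : daily sea-surface temperature (deg C)
    clim_mean : day-matched climatological mean SST (deg C)
    clim_p90  : day-matched 90th-percentile threshold (deg C)
    An event is a run of >= min_duration days with sst > clim_p90;
    intensities are reported relative to the climatological mean.
    """
    sst, clim_mean, clim_p90 = map(np.asarray, (sst, clim_mean, clim_p90))
    above = sst > clim_p90
    events, i, n = [], 0, len(above)
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:
                j += 1
            if j - i >= min_duration:
                anom = sst[i:j] - clim_mean[i:j]
                events.append({"start": i, "end": j - 1, "duration": j - i,
                               "max_intensity": float(anom.max()),
                               "mean_intensity": float(anom.mean())})
            i = j
        else:
            i += 1
    return events
```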

Keywords: marine heatwaves, global warming, climate change, sea surface temperature, marine ecosystem

Procedia PDF Downloads 120
1568 Development of 3D Neck Muscle to Analyze the Effect of Active Muscle Contraction in Whiplash Injury

Authors: Nisha Nandlal Sharma, Julaluk Carmai, Saiprasit Koetniyom, Bernd Markert

Abstract:

Whiplash injuries are mostly experienced in car accidents. Among the whiplash symptoms commonly reported in studies, neck pain and headaches are the two most frequently observed. The whiplash injury mechanism is poorly understood. In the present study, a hybrid neck muscle model was developed with a combination of solid tetrahedral elements and 1D beam elements. The solid tetrahedral elements represent the passive part of the muscle, whereas the 1D beam elements represent the active part. To simulate the active behavior of the muscle, a Hill-type muscle model was applied to the beam elements. To simulate the non-linear passive properties of the muscle, the solid elements were modeled with a rubber/foam material model. Selected important muscles were then inserted into THUMS (Total Human Model for Safety). THUMS was given boundary conditions similar to experimental tests. The model was exposed to 4g and 7g rear impacts, as these loads are close to the low-speed impacts that cause whiplash. The effect of muscle activation level on occupant kinematics during whiplash was analyzed.

Keywords: finite element model, muscle activation, THUMS, whiplash injury mechanism

Procedia PDF Downloads 329
1567 Optimal Evaluation of Weather Risk Insurance for Wheat

Authors: Slim Amami

Abstract:

A model is developed to manage the risks related to climate conditions in the agricultural sector. It determines the yearly optimum premium to be paid by a farmer in order to reach a required turnover. The model is mainly based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, within a safety ball that can be determined from past meteorological data. This allows a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall, and temperature. By a simple best-parameter fit from an expert table drawn up with professionals, an optimal representation of yearly production is deduced from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer from insurance companies to effectively protect the harvest. The application to wheat production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
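
As a sketch of the regression step, the dependence of yearly production on the driving meteorological parameters can be fitted by ordinary least squares; the records and column meanings below are hypothetical and stand in for the expert-table calibration described above:

```python
import numpy as np

# Hypothetical yearly records: [avg daily sunlight (h), rainfall (mm), avg temperature (deg C)]
X = np.array([[5.1, 620.0, 11.2],
              [4.8, 580.0, 10.9],
              [5.4, 700.0, 11.8],
              [4.6, 540.0, 10.5],
              [5.0, 660.0, 11.4]])
y = np.array([7.2, 6.8, 7.9, 6.3, 7.4])   # wheat yield, t/ha (illustrative)

# Least-squares fit of y = b0 + b1*sunlight + b2*rainfall + b3*temperature
A = np.column_stack([np.ones(len(y)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coeffs
print("coefficients:", coeffs)
print("max relative error: %.1f%%" % (100 * np.max(np.abs(predicted - y) / y)))
```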

Keywords: agriculture, database, meteorological factors, production model, optimal price

Procedia PDF Downloads 218
1566 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing

Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev

Abstract:

A large amount of manure and its irrational use negatively affect the environment. As compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150–200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). The characteristic composition of the dung is, wt.%: H₂O – 30, C – 29.07, H – 4.06, O – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, CaO – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis. Plasma gasification and pyrolysis of dung were analyzed in the temperature range 300–3,000 K and at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At an optimum temperature of 1,500 K, which provides complete gasification of dung carbon, the maximum yield of combustible components (99.4 vol.% during dung gasification and 99.5 vol.% during pyrolysis), and decomposition of toxic compounds of furan, dioxin, and benz(a)pyrene, the following composition of combustible gas was obtained, vol.%: CO – 29.6, H₂ – 35.6, CO₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and CO – 30.2, H₂ – 38.3, CO₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively. An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for the dung processing experiments. The dung was gasified in an air (or, during pyrolysis, nitrogen) plasma jet, which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: CO – 25.9, H₂ – 32.9, CO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); CO – 32.6, H₂ – 24.1, CO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between the calculation and the experiment.

Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation

Procedia PDF Downloads 35
1565 Continuous Blood Pressure Measurement from Pulse Transit Time Techniques

Authors: Chien-Lin Wang, Cha-Ling Ko, Tainsong Chen

Abstract:

Blood pressure (BP) is one of the vital signs and is an index that helps determine the stability of life. In this respect, some spinal cord injury patients need to take the tilt table test. While doing the test, the posture changes abruptly, which may cause a patient's BP to change abnormally. This may cause patients to feel discomfort, and even feel as though their life is threatened. Therefore, if a continuous non-invasive BP assessment system were built, it could help alert health care professionals during rehabilitation when the BP value is out of range. In our research, BP assessment by the pulse transit time (PTT) technique was developed. In the system, we use a self-made photoplethysmograph (PPG) sensor and filter circuit to detect two PPG signals and to calculate the time difference. The BP can immediately be assessed from the trend line. According to the results of this study, the relationship between systolic BP and PTT has a highly negative linear correlation (R2 = 0.8). Further, we used the trend line to assess the BP value and compared it to a commercial sphygmomanometer (Omron MX3); the error rate of the system was found to be in the range of ±10%, which is within the permissible error range of a commercial sphygmomanometer. Continuous blood pressure measurement by the pulse transit time technique may have the potential to become a convenient method for clinical rehabilitation.
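
A minimal sketch of the trend-line step, assuming hypothetical calibration pairs of pulse transit time (PTT) and cuff-measured systolic BP; the negative slope reflects the strong negative correlation reported above:

```python
import numpy as np

# Hypothetical calibration data: pulse transit time (ms) vs systolic BP (mmHg)
ptt = np.array([240.0, 250.0, 262.0, 275.0, 290.0])
sbp = np.array([132.0, 127.0, 121.0, 115.0, 108.0])

slope, intercept = np.polyfit(ptt, sbp, 1)      # linear trend line
r2 = np.corrcoef(ptt, sbp)[0, 1] ** 2

def estimate_sbp(ptt_ms):
    """Continuous BP estimate from a beat-to-beat PTT measurement."""
    return slope * ptt_ms + intercept

print("R2 = %.2f, estimated SBP at PTT = 268 ms: %.1f mmHg" % (r2, estimate_sbp(268.0)))
```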

Keywords: continuous blood pressure measurement, PPG, pulse transit time, transit velocity

Procedia PDF Downloads 348
1564 Relation between Sensory Processing Patterns and Working Memory in Autistic Children

Authors: Abbas Nesayan

Abstract:

Background: In recent years, autism has received considerable attention in both the public and research arenas. Autistic children have dysfunctions in communication and socialization and show repetitive and stereotyped behaviors. In addition, they clinically suffer from difficulties in attention, challenges with familiar behaviors, and sensory processing problems. Several variables are linked to sensory processing problems in autism; one of these variables is working memory. Working memory is part of executive function and provides the ability needed to complete multi-stage tasks. Method: This study is categorized as correlational research. After determining the entry criteria, 50 children were selected according to a purposive sampling method. Dunn's Sensory Profile School Companion was used for the assessment of sensory processing patterns; the Behavior Rating Inventory of Executive Function (BRIEF) was used for the assessment of working memory. Pearson correlation coefficients and linear regression were used for data analysis. Results: The results showed a significant relationship between sensory processing patterns (low registration, sensory seeking, sensory sensitivity, and sensory avoiding) and working memory in autistic children. Conclusion: According to the findings, there is a significant relationship between sensory processing patterns and working memory. Therefore, interventions based on sensory processing could be used to improve working memory.

Keywords: sensory processing patterns, working memory, autism, autistic children

Procedia PDF Downloads 210
1563 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone

Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg

Abstract:

Achieving zero-energy targets in existing buildings is known to be a difficult task requiring important cuts in building energy consumption, which in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall energy consumption. Between the building's consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art, typical mid- and high-rise fully air-conditioned (AC) office buildings through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug-load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully AC buildings under an average of seven floors only by compromising on energy-intense facilities (such as full AC, unlimited power supply, and standard user behaviour). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system's energy-savings goal.

Keywords: energy simulation, office building, tropical climate, zero energy buildings

Procedia PDF Downloads 177
1562 Geomorphology Evidence of Climate Change in Gavkhouni Lagoon, South East Isfahan, Iran

Authors: Manijeh Ghahroudi Tali, Ladan Khedri Gharibvand

Abstract:

Gavkhouni lagoon, in the southeast of Isfahan (Iran), is one of the pluvial lakes and a legacy of the Quaternary era that emerged during periods with more precipitation and less evaporation. Climate change, the lack of water resources, and the drying of the Zayandehrood's freshwater have increased entropy and activated a dynamic that is converting the lagoon into a playa. The morphometry of 61 polygonal clay microforms in wet-zone soil, 52 polygonal clay microforms in pediplain-zone soil, and 63 microforms in sulfate soil is evaluated with a fractal model. After calculating the microforms' area-perimeter fractal dimension, their turbulence level was analyzed. The fractal dimensions (DAP) obtained from the analysis of microforms in the pediplain-zone, wet-zone, and sulfate soils are 1.21-1.39, 1.27-1.44, and 1.29-1.41, respectively, which is indicative of turbulence in these zones. The logarithmic graph drawn for each region also shows a linear relationship between the logarithm of the microforms' area and perimeter, such that the correlation coefficient (R2) obtained for the wet zone is larger than 0.96, for the pediplain zone larger than 0.99, and for the sulfate zone 0.9. Increased turbulence in this region suggests a morphological transformation of the system and the lagoon's conversion to a new ecosystem, which can be accompanied by serious risks.
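
A minimal sketch of the area-perimeter fractal dimension calculation, assuming the usual relation P ∝ A^(D/2), so that DAP is twice the slope of the log(perimeter) versus log(area) regression; the microform measurements below are hypothetical:

```python
import numpy as np

# Hypothetical polygonal clay microforms: area (cm^2) and perimeter (cm)
area = np.array([12.0, 18.5, 25.3, 33.1, 48.7, 60.2])
perimeter = np.array([14.8, 19.6, 24.1, 28.9, 37.5, 43.0])

log_a, log_p = np.log(area), np.log(perimeter)
slope, intercept = np.polyfit(log_a, log_p, 1)
d_ap = 2.0 * slope                               # area-perimeter fractal dimension
r2 = np.corrcoef(log_a, log_p)[0, 1] ** 2        # linearity of the log-log plot

print("DAP = %.2f, R2 = %.2f" % (d_ap, r2))
```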

Keywords: fractal, Gavkhouni, microform, Iran

Procedia PDF Downloads 264
1561 Effects of Local Ground Conditions on Site Response Analysis Results in Hungary

Authors: Orsolya Kegyes-Brassai, Zsolt Szilvágyi, Ákos Wolf, Richard P. Ray

Abstract:

Local ground conditions have a substantial influence on the seismic response of structures. Their inclusion in seismic hazard assessment and structural design can be realized at different levels of sophistication. However, response results based on more advanced calculation methods, e.g., nonlinear or equivalent linear site analysis, tend to show significant discrepancies when compared to simpler approaches. This project's main objective was to compare results from several 1-D response programs to Eurocode 8 design spectra. Data from in-situ site investigations were used for assessing local ground conditions at several locations in Hungary. After a discussion of the in-situ measurements and calculation methods used, a comprehensive evaluation of all major factors contributing to site response is given. While the Eurocode spectra should account for local ground conditions based on soil classification, there is a wide variation in the peak ground acceleration determined from the 1-D analyses versus Eurocode. Results show that the current Eurocode 8 design spectra may not be conservative enough to account for local ground conditions typical of Hungary.

Keywords: 1-D site response analysis, multichannel analysis of surface waves (MASW), seismic CPT, seismic hazard assessment

Procedia PDF Downloads 242
1560 Performance Comparisons between PID and Adaptive PID Controllers for Travel Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S. B. Mohd-Noor, T. S. Gunawan, S. Khan, N. I. Othman, N. Tazali, R. B. Islam

Abstract:

This paper provides a comparative study of the performance of standard PID and adaptive PID controllers tested on the travel angle of a 3-Degree-of-Freedom (3-DOF) Quanser bench-top helicopter. Quanser, a well-known manufacturer of educational bench-top helicopters, has developed a Proportional-Integral-Derivative (PID) controller with a Linear Quadratic Regulator (LQR) for the travel, pitch, and yaw angles of the bench-top helicopter. The performance of this PID controller is relatively good; however, it could be further improved if the controller were combined with an adaptive element. The objective of this research is to design an adaptive PID controller and then compare its performance with that of the standard PID. The controller design and tests focus on travel angle control only. The adaptive method used in this project is a self-tuning controller, in which the controller's parameters are updated online. Two adaptive algorithms, pole-placement and deadbeat, were chosen as the methods to obtain the optimal controller parameters. Performance comparisons have shown that the adaptive (deadbeat) PID controller produces more desirable performance than the standard PID and the adaptive (pole-placement) PID. The adaptive (deadbeat) PID controller attained a very fast settling time (5 seconds) and a very small percentage overshoot (5% to 7.5%) for 10° to 30° step changes of the travel angle.
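
A minimal sketch of the self-tuning idea: a discrete PID loop whose gains can be re-tuned online. The actual pole-placement and deadbeat design steps used in the study are not reproduced; the gains and the update hook below are placeholders:

```python
class AdaptivePID:
    """Discrete PID controller whose gains may be updated online (self-tuning)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update_gains(self, kp, ki, kd):
        # In a self-tuning controller these gains would come from an online design
        # step (e.g. deadbeat or pole-placement applied to recursively estimated
        # plant parameters); here the hook simply overwrites them.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = AdaptivePID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)    # illustrative gains
u = pid.step(setpoint=20.0, measurement=12.0)         # travel-angle control command
```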

Keywords: adaptive control, deadbeat, pole-placement, bench-top helicopter, self-tuning control

Procedia PDF Downloads 494
1559 Hybrid Algorithm for Non-Negative Matrix Factorization Based on Symmetric Kullback-Leibler Divergence for Signal Dependent Noise: A Case Study

Authors: Ana Serafimovic, Karthik Devarajan

Abstract:

Non-negative matrix factorization approximates a high-dimensional non-negative matrix V as the product of two non-negative matrices, W and H, and allows only additive linear combinations of data, enabling it to learn parts-based representations. It has been successfully applied in the analysis and interpretation of high-dimensional data arising in neuroscience, computational biology, and natural language processing, to name a few. The objective of this paper is to assess a hybrid algorithm for non-negative matrix factorization with multiplicative updates. The method aims to minimize the symmetric version of the Kullback-Leibler divergence known as intrinsic information and assumes that the noise is signal-dependent and that it originates from an arbitrary distribution from the exponential family. It is a generalization of currently available algorithms for Gaussian, Poisson, gamma, and inverse Gaussian noise. We demonstrate the potential usefulness of the new generalized algorithm by comparing its performance to baseline methods which also aim to minimize symmetric divergence measures.
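
For reference, a minimal sketch of the baseline multiplicative-update scheme that the hybrid algorithm generalizes, written for the (non-symmetric) generalized Kullback-Leibler divergence D(V || WH); the study's symmetric-divergence updates differ from these:

```python
import numpy as np

def nmf_gkl(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative-update NMF minimizing the generalized KL divergence
    D(V || WH) (Lee-Seung form). Returns non-negative factors W, H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 15))    # non-negative test matrix
W, H = nmf_gkl(V, rank=3)
```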

Keywords: non-negative matrix factorization, dimension reduction, clustering, intrinsic information, symmetric information divergence, signal-dependent noise, exponential family, generalized Kullback-Leibler divergence, dual divergence

Procedia PDF Downloads 242
1558 Assessment of Material Type, Diameter, Orientation and Closeness of Fibers in Vulcanized Reinforced Rubbers

Authors: Ali Osman Güney, Bahattin Kanber

Abstract:

In this work, the effect of material type, diameter, orientation, and closeness of fibers on the general performance of reinforced vulcanized rubbers is investigated using the finite element method with experimental verification. Various fiber materials, such as hemp, nylon, and polyester, are used for different fiber diameters, orientations, and closenesses. 3D finite element models are developed by considering bonded contact elements between the fiber and rubber sheet interfaces. The fibers are assumed to be linear elastic, while the vulcanized rubber is considered hyper-elastic. After an experimental verification of the finite element results, the developed models are analyzed under a prescribed displacement that causes tension. The normal stresses in the fibers and the shear stresses between the fibers and the rubber sheet are investigated in all models. The large deformation of the reinforced rubber sheet is also represented for various fiber conditions under incremental loading. A general assessment is made of the best fiber properties of reinforced rubber sheets for tension-load conditions.

Keywords: reinforced vulcanized rubbers, fiber properties, out of plane loading, finite element method

Procedia PDF Downloads 340
1557 Constructing a Physics Guided Machine Learning Neural Network to Predict Tonal Noise Emitted by a Propeller

Authors: Arthur D. Wiedemann, Christopher Fuller, Kyle A. Pascioni

Abstract:

With the introduction of electric motors, small unmanned aerial vehicle designers have to consider trade-offs between acoustic noise and the thrust generated. Currently, there are few low-computational-cost tools available for predicting the acoustic noise emitted by a propeller into the far field. Artificial neural networks offer a highly non-linear and adaptive model for predicting isolated and interactive tonal noise, but neural networks require large data sets, exceeding what is practical when modeling experimental results. A methodology known as physics-guided machine learning has been applied in this study to reduce the data set required to train the network. After building and evaluating several neural networks, the best model is investigated to determine how the network successfully predicts the acoustic waveform. Lastly, a post-network transfer function is developed to remove discontinuities from the predicted waveform. Overall, methodologies from physics-guided machine learning show a notable improvement in prediction performance, but additional loss functions are necessary for constructing predictive networks on small datasets.
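
A minimal sketch of the composite-loss idea behind physics-guided training, assuming the physics guidance enters as a penalty on the mismatch between the network output and a low-fidelity physics-based estimate; the arrays, penalty form, and weighting below are illustrative only:

```python
import numpy as np

def pgml_loss(y_pred, y_meas, physics_residual, lam=0.1):
    """Physics-guided loss: ordinary data misfit plus a penalty on disagreement
    with a known physical relation evaluated at the same operating points."""
    data_term = np.mean((y_pred - y_meas) ** 2)
    physics_term = np.mean(physics_residual ** 2)
    return data_term + lam * physics_term

# Illustrative call with hypothetical tonal sound-pressure levels (dB)
y_pred = np.array([62.1, 64.8, 66.0])    # network prediction
y_meas = np.array([61.5, 65.2, 66.4])    # measurement
y_phys = np.array([60.9, 64.1, 65.5])    # low-fidelity physics estimate
loss = pgml_loss(y_pred, y_meas, y_pred - y_phys)
```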

Keywords: aeroacoustics, machine learning, propeller, rotor, neural network, physics guided machine learning

Procedia PDF Downloads 216
1556 Plackett-Burman Design to Evaluate the Influence of Operating Parameters on Anaerobic Orthophosphate Release from Enhanced Biological Phosphorus Removal Sludge

Authors: Reza Salehi, Peter L. Dold, Yves Comeau

Abstract:

The aim of the present study was to investigate the effect of a total of six operating parameters, including pH (X1), temperature (X2), stirring speed (X3), chemical oxygen demand (COD) (X4), volatile suspended solids (VSS) (X5), and time (X6), on anaerobic orthophosphate release from enhanced biological phosphorus removal (EBPR) sludge. An 8-run Plackett-Burman design was applied, and the statistical analysis of the experimental data was performed using the Minitab 16.2.4 software package. The analysis of variance (ANOVA) results revealed that temperature, COD, VSS, and time had a significant effect, with p-values of less than 0.05, whereas pH and stirring speed were identified as non-significant parameters that nevertheless influenced orthophosphate release from the EBPR sludge. The mathematical expression obtained from the first-order multiple linear regression model between orthophosphate release from the EBPR sludge (Y) and the operating parameters (X1-X6) was Y = 18.59 + 1.16X1 - 3.11X2 - 0.81X3 + 3.79X4 + 9.89X5 + 4.01X6. The model p-value and coefficient of determination (R2) were 0.026 and 99.87%, respectively, which indicates that the model is significant and that the predicted values of orthophosphate release from the EBPR sludge correlate excellently with the observed values.
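
The fitted first-order model can be evaluated directly at coded factor levels (-1/+1 in a Plackett-Burman design); a minimal sketch using the coefficients quoted above:

```python
import numpy as np

# Fitted first-order model (coded units):
# Y = 18.59 + 1.16*X1 - 3.11*X2 - 0.81*X3 + 3.79*X4 + 9.89*X5 + 4.01*X6
intercept = 18.59
coeffs = np.array([1.16, -3.11, -0.81, 3.79, 9.89, 4.01])

def predicted_release(x):
    """Predicted orthophosphate release for coded levels x = [X1..X6] in {-1, +1}."""
    return intercept + coeffs @ np.asarray(x, dtype=float)

# Example: high pH, low temperature, low stirring speed, high COD, high VSS, long time
print(predicted_release([+1, -1, -1, +1, +1, +1]))
```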

Keywords: anaerobic, operating parameters, orthophosphate release, Plackett-Burman design

Procedia PDF Downloads 275
1555 Developing a Risk Rating Tool for Shopping Centres

Authors: Prandesha Govender, Chris Cloete

Abstract:

Purpose: The objective of this paper is to develop a tool for evaluating the financial risk of a shopping center. Methodology: Important factors that indicate the success of a shopping center were identified from the available literature. Weights were allocated to these factors, and a risk rating was calculated for 505 shopping centers in the largest province of South Africa by taking the factor scores, factor weights, and category weights into account. The ratings for ten randomly selected shopping centers were correlated with consumer feedback and standardized against ECAI (External Credit Assessment Institutions) data for the same centers. The ratings were also mapped to corporates with the same risk rating to provide a more intuitive assessment of the inherent risk of each center. Results: The proposed risk tool shows a strong linear correlation with consumer views and compares well with expert opinions, such as those of fund managers and REITs. Interpretation of the tool was also illustrated by correlating the risk rating of selected shopping centers to the risk rating of reputable and established entities. Conclusions: The proposed Shopping Centre Risk Tool, used in conjunction with financial inputs from the relevant center, should prove useful to an investor when the desirability of investment in, or expansion, renovation, or purchase of, a shopping center is being considered.
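
A minimal sketch of the weighted-scoring step described in the methodology; the factors, scores, and weights below are hypothetical and do not reproduce the study's factor set:

```python
# Hypothetical factor scores (0-10) with factor and category weights
factors = {
    "location":        {"score": 8.0, "factor_weight": 0.40, "category_weight": 0.5},
    "tenant_mix":      {"score": 6.5, "factor_weight": 0.60, "category_weight": 0.5},
    "vacancy_rate":    {"score": 7.0, "factor_weight": 0.70, "category_weight": 0.5},
    "trading_density": {"score": 5.5, "factor_weight": 0.30, "category_weight": 0.5},
}

# Risk rating = sum over factors of (factor score x factor weight x category weight)
risk_rating = sum(f["score"] * f["factor_weight"] * f["category_weight"]
                  for f in factors.values())
print("risk rating: %.2f" % risk_rating)
```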

Keywords: risk, shopping centres, risk modelling, investment, rating tool, rating scale

Procedia PDF Downloads 111
1554 Response Surface Methodology to Obtain Disopyramide Phosphate Loaded Controlled Release Ethyl Cellulose Microspheres

Authors: Krutika K. Sawant, Anil Solanki

Abstract:

The present study deals with the preparation and optimization of ethyl cellulose microspheres loaded with disopyramide phosphate using the solvent evaporation technique. A central composite design, consisting of a two-level full factorial design superimposed on a star design, was employed for optimizing the preparation of the microspheres. The drug:polymer ratio (X1) and the stirrer speed (X2) were chosen as the independent variables. The cumulative release of the drug at different times (2, 6, 10, 14, and 18 hr) was selected as the dependent variable. An optimum polynomial equation was generated for the prediction of the response variable at 10 hr. Based on the results of multiple linear regression analysis and F statistics, it was concluded that sustained action can be obtained when X1 and X2 are kept at high levels. The X1X2 interaction was found to be statistically significant. The drug release pattern fitted the Higuchi model well. The data of a selected batch were subjected to an optimization study using a Box-Behnken design, and an optimal formulation was fabricated. Good agreement was observed between the predicted and the observed dissolution profiles of the optimal formulation.
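
The Higuchi fit mentioned above amounts to regressing cumulative release against the square root of time; a minimal sketch with hypothetical dissolution data:

```python
import numpy as np

t = np.array([2.0, 6.0, 10.0, 14.0, 18.0])       # time, hr
q = np.array([21.0, 37.0, 48.0, 56.0, 64.0])     # cumulative release, % (hypothetical)

# Higuchi model: Q = k_H * sqrt(t)
sqrt_t = np.sqrt(t)
k_h, intercept = np.polyfit(sqrt_t, q, 1)
r2 = np.corrcoef(sqrt_t, q)[0, 1] ** 2

print("Higuchi constant k_H = %.2f %%/hr^0.5, R2 = %.3f" % (k_h, r2))
```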

Keywords: disopyramide phosphate, ethyl cellulose, microspheres, controlled release, Box-Behnken design, factorial design

Procedia PDF Downloads 447
1553 Microdosimetry in Biological Cells: A Monte Carlo Method

Authors: Hamidreza Jabal Ameli, Anahita Movahedi

Abstract:

Purpose: In radionuclide therapy, radioactive atoms are coupled to monoclonal antibodies (mAbs) to treat cancerous tumors while limiting radiation to healthy tissues. Tumoral and normal tissues are not equally sensitive to radiation; biological effects such as cellular repair processes or the presence of less radiosensitive cells, such as hypoxic cells, should be taken into account. For this reason, in this paper we calculate the biologically effective dose (BED) inside the tumoral area and in the healthy cells around the tumor. Methods: In this study, the deposited doses of a radionuclide, gold-198, inside a cell lattice and the surrounding healthy tissues were calculated with a Monte Carlo method. The elemental compositions and densities of malignant and healthy tissues were obtained from ICRU Report 44. To approach realistic oxygen conditions, the necrotic and hypoxic areas inside the tumor were assessed. Results: Based on the linear-quadratic expression defined in the Monte Carlo model, the results showed that a larger amount of BED is deposited in the well-oxygenated part of the hypoxic area than in the necrotic area. Moreover, there is a significant difference between the absorbed-dose curves with and without the BED weighting.
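
A minimal worked form of the linear-quadratic biologically effective dose used in such calculations, with a simple oxygen-enhancement-ratio (OER) scaling as one way to represent hypoxic regions; the parameter values are illustrative only:

```python
def bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy, oer=1.0):
    """Biologically effective dose from the linear-quadratic model:
        BED = D * (1 + d / (alpha/beta))
    As a simple hypoxia correction, the physical dose per fraction is divided
    by the oxygen enhancement ratio (OER) before applying the LQ expression."""
    d_eff = dose_per_fraction_gy / oer
    n_fractions = total_dose_gy / dose_per_fraction_gy
    return n_fractions * d_eff * (1.0 + d_eff / alpha_beta_gy)

# Illustrative comparison for alpha/beta = 10 Gy, 40 Gy delivered in 2 Gy fractions
print(bed(40.0, 2.0, 10.0, oer=1.0))   # well-oxygenated tissue
print(bed(40.0, 2.0, 10.0, oer=2.5))   # hypoxic tissue
```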

Keywords: biological dose, monte carlo, hypoxia, radionuclide therapy

Procedia PDF Downloads 479
1552 Experimental Study on Use of Crumb Rubber to Mitigate Expansive Soil Pressures on Basement Walls

Authors: Kwestan Salimi, Jenna Jacoby, Michelle Basham, Amy Cerato

Abstract:

The extreme annual weather patterns of the central United States have increased the need for underground shelters for protection from destructive tornadic activity. However, very few residential homes have basements due to the added construction expense and the prevalence of expansive soils covering the central portion of the United States. These expansive soils shrink and swell, increasing earth pressure on basement walls. To mitigate the effect of expansive soils on basement walls, this study performed bench-scale tests using a common natural expansive soil mitigated with a backfill layer of crumb rubber. The results revealed that at 80% soil compaction, a 1:6 backfill height to total height ratio produced a 66% reduction in swell pressure. However, this percent reduction decreased to 27% for 90% soil compaction. It was also found that there is a strong linear correlation between compaction percentage and reduction in swell pressure when using the same backfill height to total height ratio. Using this correlation and extrapolating to 95% compaction, the percent reduction in swell pressure was approximately 12%.

Keywords: expansive soils, swell/shrink, swell pressure, stabilization, crumb rubber

Procedia PDF Downloads 153
1551 Design and Implementation Guidance System of Guided Rocket RKX-200 Using Optimal Guidance Law

Authors: Amalia Sholihati, Bambang Riyanto Trilaksono

Abstract:

As an island nation, the Republic of Indonesia needs a capable military defense on land, at sea, and in the air, so the development of military weapons such as air-defense rockets is very important. The RKX-200 is one of the guided rockets developed by an Indonesian consortium coordinated by LAPAN and serves to intercept targets. The RKX-200 is designed for speeds of Mach 0.5-0.9. It belongs to the category of two-stage rockets, in which control is carried out on the second stage once the rocket has separated from the booster. The requirement for better performance in intercepting targets with higher maneuverability continues to push the development of the optimal guidance law (OGL), which is derived from non-linear equations. This research focused on the design and implementation of an OGL-based guidance system for the RKX-200 rocket while considering the rocket's limitations, such as its aerodynamics and actuators. A guided missile control system has three main parts: the guidance system, the navigation system, and the autopilot system. Other parts, such as the navigation system and supporting components, were simulated in MATLAB based on the results of previous studies. In addition to the MATLAB simulation, testing was also conducted on ARM-based TWR-K60D100M hardware in conjunction with the navigation system and non-linear models in MATLAB using Hardware-in-the-Loop Simulation (HILS).

Keywords: RKX-200, guidance system, optimal guidance law, HILS

Procedia PDF Downloads 246
1550 A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling

Authors: Shivam Dwivedi, Sumit Prakash Gupta, Durga Toshniwal

Abstract:

It is a challenging task today to keep defence forces in the highest state of combat readiness under budgetary constraints. A huge amount of time and money is squandered on unnecessary and expensive traditional maintenance activities. To overcome this limitation, a Defence Intelligent Mission Readiness and Maintenance Scheduling System has been proposed, which ameliorates the maintenance system by diagnosing equipment condition and predicting maintenance requirements. Based on new data mining algorithms, this system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms, such as a Weighted Feature Ranking Genetic Algorithm and an SVM-Random Forest linear ensemble, it improves reliability, availability, and safety while reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over conventional data mining algorithms. The system, utilizing the intelligent condition-based maintenance approach, improves the operational and maintenance decision strategy of the defence force.
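
A minimal sketch of the classifier-ensemble idea, assuming a soft-voting combination of an SVM and a random forest in scikit-learn; the weighted-feature-ranking genetic algorithm and the exact ensemble weighting used in the study are not reproduced, and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for equipment condition records (label: fit vs. needs maintenance)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(kernel="linear", probability=True, random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft")
ensemble.fit(X_train, y_train)
print("hold-out accuracy: %.3f" % ensemble.score(X_test, y_test))
```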

Keywords: condition based maintenance, data mining, defence maintenance, ensemble, genetic algorithms, maintenance scheduling, mission capability

Procedia PDF Downloads 285
1549 Investigating the Effect of Using Amorphous Silica Ash Obtained from Rice Husk as a Partial Replacement of Ordinary Portland Cement on the Mechanical and Microstructure Properties of Cement Paste and Mortar

Authors: Aliyu Usman, Muhaammed Bello Ibrahim, Yusuf D. Amartey, Jibrin M. Kaura

Abstract:

This research is aimed at investigating the effect of using amorphous silica ash (ASA) obtained from rice husk as a partial replacement of ordinary Portland cement (OPC) on the mechanical and microstructural properties of cement paste and mortar. ASA was used as a partial replacement of ordinary Portland cement at 3, 5, 8, and 10 percent. These partial replacements were used to produce Cement-ASA paste and Cement-ASA mortar. ASA was found to contain all the major chemical compounds found in cement, with the exception of alumina: SiO2 (91.5%), CaO (2.84%), and Fe2O3 (1.96%), with a loss on ignition (LOI) of 9.18%. It also contains other minor oxides found in cement. The consistency of the Cement-ASA paste was found to increase with increasing ASA replacement. Likewise, the setting time and soundness of the Cement-ASA paste also increase with increasing ASA replacement. The tests on the hardened mortar were destructive in nature and included flexural strength tests on prismatic beams (40 mm x 40 mm x 160 mm) at 2, 7, 14, and 28 days of curing and compressive strength tests on cubes (40 mm x 40 mm, using the auxiliary steel platens) at 2, 7, 14, and 28 days of curing. The Cement-ASA mortar flexural and compressive strengths were found to increase with curing time and decrease with cement replacement by ASA. It was observed that 5 percent replacement of cement with ASA attained the highest strength for all curing ages, and all the percentage replacements attained the target compressive strength of 6 N/mm2 at 28 days. There is an increase in the drying shrinkage of Cement-ASA mortar with curing time; it was also observed that the drying shrinkages for all curing ages were greater than that of the control specimen, all of which were greater than the code recommendation of less than 0.03%. The scanning electron microscope (SEM) was used to study the Cement-ASA mortar microstructure and to look for hydration products and morphology.

Keywords: amorphous silica ash, cement mortar, cement paste, scanning electron microscope

Procedia PDF Downloads 428
1548 Microfluidic Fluid Shear Mechanotransduction Device Using Linear Optimization of Hydraulic Channels

Authors: Sanat K. Dash, Rama S. Verma, Sarit K. Das

Abstract:

A logarithmic microfluidic shear device was designed and fabricated for cellular mechanotransduction studies. The device contains four cell culture chambers in which the flow was modulated to achieve a logarithmic increment. Resistance values were optimized to make the device compact. The network of resistances was developed according to a unique combination of series and parallel resistances found via optimization. Simulation results obtained in Ansys 16.1 matched the analytical calculations and showed the shear stress distribution at different inlet flow rates. Fabrication of the device was carried out using conventional photolithography and PDMS soft lithography. The flow profile was validated using DI water as the working fluid and measuring the volume collected at all four outlets. The volumes collected at the outlets were in accordance with the simulation results at inlet flow rates ranging from 1 ml/min to 0.1 ml/min. The device can exert fluid shear stresses spanning four orders of magnitude on the culture chamber walls, covering shear stress values from interstitial flow to blood flow. This allows studying cell behavior over the long physiological range of shear stress in a single run, reducing the number of experiments.
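
A minimal sketch of the resistance-network reasoning, using the standard rectangular-channel approximation for hydraulic resistance and the series/parallel combination rules; the dimensions and branch ratios below are illustrative, not the device's:

```python
def channel_resistance(mu, length, width, height):
    """Approximate hydraulic resistance of a rectangular microchannel (height < width):
       R ~ 12 * mu * L / (w * h^3 * (1 - 0.63 * h / w))"""
    return 12.0 * mu * length / (width * height**3 * (1.0 - 0.63 * height / width))

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

mu = 1.0e-3                                            # water viscosity, Pa.s
r_unit = channel_resistance(mu, 10e-3, 200e-6, 50e-6)  # 10 mm x 200 um x 50 um channel

# Branch resistances chosen so four parallel chambers under one pressure drop
# see a roughly logarithmic (decade-spaced) flow split, since Q_i ~ 1/R_i.
branches = [r_unit,
            series(r_unit, 9 * r_unit),
            series(r_unit, 99 * r_unit),
            series(r_unit, 999 * r_unit)]
total_inverse = sum(1.0 / r for r in branches)
flow_fractions = [(1.0 / r) / total_inverse for r in branches]
print("flow fractions:", flow_fractions)
print("equivalent network resistance:", parallel(*branches))
```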

Keywords: microfluidics, mechanotransduction, fluid shear stress, physiological shear

Procedia PDF Downloads 126
1547 A Study of Industry 4.0 and Digital Transformation

Authors: Ibrahim Bashir, Yahaya Y. Yusuf

Abstract:

The ongoing shift towards Industry 4.0 represents a critical growth factor in the industrial enterprise, where the digital transformation of industries is increasingly seen as a crucial element for competitiveness. This transformation holds substantial potential, yet its full benefits have yet to be realized due to the fragmented approach to introducing Industry 4.0 technologies. Therefore, this pilot study aims to explore the individual and collective impact of Industry 4.0 technologies and digital transformation on organizational performance. Data were collected through a questionnaire-based survey across 51 companies in the manufacturing industry in the United Kingdom. Correlation and multiple linear regression analyses were conducted to assess the relationships and impacts among the variables in the study. The results show that Industry 4.0 and digital transformation positively influence organizational performance and that Industry 4.0 technologies positively influence digital transformation. The results of this pilot study indicate that the implementation of Industry 4.0 technologies is vital for increasing organizational performance; however, their roles differ largely. The differences are manifest in how the types of Industry 4.0 technologies correlate with how organizations integrate digital technologies into their operations. Hence, there is a clear indication of a strong correlation between Industry 4.0 technology, digital transformation, and organizational performance. Consequently, our study presents numerous pertinent implications that propel the theory of I4.0, digital business transformation (DBT), and organizational performance forward, as well as guide managers in the manufacturing sector.

Keywords: industry 4.0 technologies, digital transformation, digital integration, organizational performance

Procedia PDF Downloads 126
1546 An Empirical Study of Students’ Learning Attitude, Problem-solving Skills and Learning Engagement in an Online Internship Course During Pandemic

Authors: PB Venkataraman

Abstract:

Most real-life problems are ill-structured. They do not have a single solution but many competing solutions. The solution paths are non-linear and ambiguous, and the problem definition itself is often a challenge. Students in professional education learn to solve such problems through internships. The current pandemic situation has constrained on-site internship opportunities; thus, students have no option but to pursue this learning online. This research assessed the learning gains of four undergraduate engineering students as they undertook an online internship in an organisation over a period of eight weeks. A clinical interview at the end of the internship provided the primary data for assessing the team's problem-solving skills using a tested rubric. In addition, changes in their learning attitudes were assessed through a pre-post study using a repurposed CLASS instrument for Electrical Engineering. Analysis of the CLASS data indicated a shift in the sophistication of their learning attitudes. A learning engagement survey adopting a 6-point Likert scale showed active participation and motivation in learning. We hope this new research will stimulate educators to exploit online internships even beyond the pandemic, as more and more business operations are transformed into virtual ones.

Keywords: ill-structured problems, learning attitudes, internship, assessment, student engagement

Procedia PDF Downloads 199
1545 Electrical Transport in Bi₁Sb₁Te₁.₅Se₁.₅ /α-RuCl₃ Heterostructure Nanodevices

Authors: Shoubhik Mandal, Debarghya Mallick, Abhishek Banerjee, R. Ganesan, P. S. Anil Kumar

Abstract:

We report magnetotransport measurements on Bi₁Sb₁Te₁.₅Se₁.₅/RuCl₃ heterostructure nanodevices. Bi₁Sb₁Te₁.₅Se₁.₅ (BSTS) is a strong three-dimensional topological insulator (3D-TI) that hosts conducting topological surface states (TSS) enclosing an insulating bulk. α-RuCl₃ (hereafter RuCl₃) is an antiferromagnet that is predicted to behave as a Kitaev-like quantum spin liquid carrying Majorana excitations. Temperature (T)-dependent resistivity measurements show the interplay between parallel bulk and surface transport channels. At T < 150 K, surface-state transport dominates over bulk transport. Multi-channel weak anti-localization (WAL) is observed as a sharp cusp in the magnetoconductivity, indicating strong spin-orbit coupling. The presence of top and bottom topological surface states (TSS), including a pair of electrically coupled Rashba surface states (RSS), is indicated. A non-linear Hall effect, explained by a two-band model, further supports this interpretation. Finally, a low-T logarithmic resistance upturn is analyzed using the Lu-Shen model, supporting the presence of gapless surface states with a π Berry phase.
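
A minimal sketch of the two-band analysis mentioned above, building the Hall resistivity by summing the Drude conductivity tensors of two electron-like channels and inverting; the carrier densities and mobilities are illustrative, and sign conventions are omitted:

```python
import numpy as np

E = 1.602e-19  # elementary charge, C

def hall_resistivity_two_band(B, n1, mu1, n2, mu2):
    """rho_xy(B) for two electron-like channels with sheet densities n1, n2 (m^-2)
    and mobilities mu1, mu2 (m^2/Vs): sum the per-channel Drude conductivities,
    then invert the conductivity tensor."""
    sxx = E * (n1 * mu1 / (1 + (mu1 * B) ** 2) + n2 * mu2 / (1 + (mu2 * B) ** 2))
    sxy = E * B * (n1 * mu1**2 / (1 + (mu1 * B) ** 2) + n2 * mu2**2 / (1 + (mu2 * B) ** 2))
    return sxy / (sxx**2 + sxy**2)

B = np.linspace(-9.0, 9.0, 181)   # magnetic field, T
rho_xy = hall_resistivity_two_band(B, n1=5e16, mu1=0.8, n2=2e18, mu2=0.05)
# Two channels with very different mobilities produce the non-linear rho_xy(B)
# that the two-band fit is used to capture.
```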

Keywords: topological materials, electrical transport, Lu-Shen model, quantum spin liquid

Procedia PDF Downloads 112
1544 Implicit Transaction Costs and the Fundamental Theorems of Asset Pricing

Authors: Erindi Allaj

Abstract:

This paper studies arbitrage pricing theory in financial markets with transaction costs. We extend the existing theory to include the more realistic possibility that the price at which investors trade depends on the traded volume. Investors in the market always buy at the ask and sell at the bid price. Transaction costs are composed of two terms: one captures the implicit transaction costs and the other the price impact. Moreover, a new definition of a self-financing portfolio is obtained. The self-financing condition suggests that continuous trading is possible but is restricted to predictable trading strategies that have left and right limits and finite quadratic variation. That is, predictable trading strategies of infinite variation and finite quadratic variation are allowed in our setting. Within this framework, the existence of an equivalent probability measure is equivalent to the absence of arbitrage opportunities, so that the first fundamental theorem of asset pricing (FFTAP) holds. It is also proved that, when this probability measure is unique, any contingent claim in the market is hedgeable in an L2-sense. The price of any contingent claim is equal to the risk-neutral price. To better understand how to apply the proposed theory, we provide an example with linear transaction costs.

Keywords: arbitrage pricing theory, transaction costs, fundamental theorems of arbitrage, financial markets

Procedia PDF Downloads 350
1543 Evaluation of the Self-Efficacy and Learning Experiences of Final year Students of Computer Science of Southwest Nigerian Universities

Authors: Olabamiji J. Onifade, Peter O. Ajayi, Paul O. Jegede

Abstract:

This study aimed to investigate the preparedness of undergraduate final-year students of Computer Science as the next entrants into the workplace. It assessed their self-efficacy in computational tasks and examined the relationship between their self-efficacy and their learning experiences in Southwest Nigerian universities. The study employed a descriptive survey research design. The population of the study comprised all final-year students of Computer Science. A purposive sampling technique was adopted to select a representative sample of interest from the final-year students of Computer Science. The Students' Computational Task Self-Efficacy Questionnaire (SCTSEQ) was used to collect data. Mean, standard deviation, frequency, percentages, and linear regression were used for data analysis. The results obtained revealed that the final-year students of Computer Science were averagely confident in performing computational tasks and that there is a significant relationship between the students' learning experiences and their self-efficacy. The study recommends that the curriculum be improved to accommodate industry experts as lecturers in some of the courses, that more practical sessions be provided, and that students' learning experiences be considered an important component of the undergraduate Computer Science curriculum development process.

Keywords: computer science, learning experiences, self-efficacy, students

Procedia PDF Downloads 135
1542 The Impacts of an Adapted Literature Circle Model on Reading Comprehension, Engagement, and Cooperation in an EFL Reading Course

Authors: Tiantian Feng

Abstract:

There is a dearth of research on the literature circle as a teaching strategy in English as a Foreign Language (EFL) classes in Chinese colleges and universities, and even fewer empirical studies of its impacts. In this one-quarter, design-based project, the researcher aims to increase students' engagement, cooperation, and, above all, reading comprehension performance by using a researcher-developed, adapted reading circle model in an EFL reading course at a Chinese college. The model also integrated team-based learning and portfolio assessment, with an emphasis on the specialization of individual responsibilities, contributions, and outcomes in reading projects, with the goal of addressing current issues in EFL classes at Chinese colleges, such as passive learning, test orientation, ineffective and uncooperative teamwork, and lack of dynamics. In this quasi-experimental research, two groups of students enrolled in the course were invited to participate in four in-class team projects, with the intervention class following the adapted literature circle model and team members rotating as Leader, Coordinator, Brain Trust, and Reporter. The researcher/instructor used a sequential explanatory mixed-methods approach to quantitatively analyze the final grades for the pre- and post-tests, as well as individual scores for team projects, and will code students' artifacts in a next step, with the results to be reported in subsequent papers. Initial analysis showed that both groups saw an increase in final grades, but the intervention group enjoyed a larger boost, suggesting that the adapted reading circle model is effective in improving students' reading comprehension performance. This research not only closes the empirical research gap on literature circles in college EFL classes in China but also adds to the pool of effective ways to optimize reading comprehension and class performance in college EFL classes.

Keywords: literature circle, EFL teaching, college english reading, reading comprehension

Procedia PDF Downloads 96
1541 Development of Pre-Mitigation Measures and Its Impact on Life-Cycle Cost of Facilities: Indian Scenario

Authors: Mahima Shrivastava, Soumya Kar, B. Swetha Malika, Lalu Saheb, M. Muthu Kumar, P. V. Ponambala Moorthi

Abstract:

Natural hazards and man-made destruction cause both economic and societal losses. Generalized pre-mitigation strategies introduced and adopted for disaster prevention all over the world are capable of augmenting resiliency and optimizing the life-cycle cost of facilities. In countries like India, where varied topographical features exist, location-specific mitigation measures and strategies need to be followed for better enhancement through event-driven and code-driven approaches. The present state of the mitigation measures followed and adopted lags in accomplishing the required development. In addition, the serious concern and debate over climate change play a vital role in enhancing the need for the development of time-bound adaptive mitigation measures. For the development of long-term sustainable policies, the incorporation of future climatic variation is inevitable. This will further assist in assessing the impact of climate change on the life-cycle cost of facilities. This paper develops more definite, region-specific, and time-bound pre-mitigation measures by reviewing the present state of mitigation measures in India and all over the world for improving the life-cycle cost of facilities. For the development of region-specific adaptive measures, Indian regions were divided based on multiple-calamity-prone regions, and geo-referencing tools were used to incorporate the effect of climate change on life-cycle cost assessment. This study puts forward a significant effort in establishing sustainable policies and helps decision makers in planning pre-mitigation measures for different regions. It will further contribute towards evaluating the life-cycle cost of facilities by adopting the developed measures.

Keywords: climate change, geo-referencing tools, life-cycle cost, multiple-calamity prone regions, pre-mitigation strategies, sustainable policies

Procedia PDF Downloads 374
1540 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti

Abstract:

Over the last decades, there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration, bucket to stuff, search, and rescue operations during earthquakes, etc. As there are numbers of self-reconfigurable robots, choosing the optimum one is always a concern for the robot user, since there is an increase in available features, facilities, complexity, etc. The objective of this research work is to present a multiple attribute decision making based methodology for the coding, evaluation, comparison, ranking, and selection of modular self-reconfigurable robots using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach. In this work, 86 attributes that affect structure and performance are identified. A database of modular self-reconfigurable robots on the basis of the different pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. Using five robots (Atron, Smores, Polybot, M-Tran 3, Superbot), an example is illustrated and a ranking of the robots is successfully carried out, which shows that Smores is the best robot for the operational need illustrated, and the methodology is found to be very effective and simple to use.
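
A minimal sketch of the TOPSIS ranking step applied to a small decision matrix; the attribute values and weights are illustrative and do not reproduce the 86-attribute database used in the study:

```python
import numpy as np

# Rows: candidate robots; columns: benefit-type attributes (illustrative scores)
robots = ["Atron", "Smores", "Polybot", "M-Tran 3", "Superbot"]
X = np.array([[6.0, 7.0, 5.0, 6.0],
              [8.0, 8.0, 7.0, 7.0],
              [7.0, 6.0, 6.0, 5.0],
              [7.0, 7.0, 8.0, 6.0],
              [6.0, 8.0, 6.0, 7.0]])
w = np.array([0.3, 0.3, 0.2, 0.2])                  # attribute weights, summing to 1

V = w * X / np.linalg.norm(X, axis=0)               # vector-normalized, weighted matrix
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)    # all attributes treated as benefits
d_plus = np.linalg.norm(V - ideal, axis=1)          # distance to the ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)    # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)            # relative closeness coefficient

for name, c in sorted(zip(robots, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```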

Keywords: self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability

Procedia PDF Downloads 212