Search results for: transmission error
557 Dendrimer-Encapsulated N, Pt Co-Doped TiO₂ for the Photodegradation of Contaminated Wastewater
Authors: S. K. M. Nzaba, H. H. Nyoni, B. Ntsendwana, B. B. Mamba, A. T. Kuvarega
Abstract:
Azo dye effluents released into water bodies are not only toxic to the ecosystem but also pose a serious threat to human health, owing to the carcinogenic and mutagenic effects of the compounds present in the dye discharge. Conventional water treatment methods such as adsorption, flocculation/coagulation, and biological processes cannot completely remove most of these dyes and their natural degradation by-products. Advanced oxidation processes (AOPs), by contrast, have proven to be effective technologies for the complete mineralization of such recalcitrant pollutants. This study therefore examined the photocatalytic degradation of the azo dye brilliant black (BB) using non-metal/metal co-doped TiO₂. N, Pt co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine dendrimer generation 0 (PAMAM G0), amine-terminated polyamidoamine dendrimer generation 1 (PAMAM G1), and hyperbranched polyethyleneimine (HPEI) as templates and nitrogen sources. Structural, morphological, and textural properties were evaluated using scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM/EDX), high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy (RS), photoluminescence (PL), and ultraviolet-visible spectroscopy (UV-Vis). The synthesized photocatalysts exhibited lower band gap energies than Degussa P-25, revealing a red shift of the band gap towards the visible-light absorption region. The photocatalytic activity of the N, Pt co-doped TiO₂ was assessed via the photocatalytic degradation of brilliant black (BB) dye. XRD confirmed that the N, metal co-doped TiO₂ containing 0.5 wt. % of the metal consisted mainly of the anatase phase in all three samples, with a particle size range of 13–30 nm. The particles were largely spherical and shifted the absorption edge well into the visible region. The band gap reduction was more pronounced for the N, Pt HPEI (Pt 0.5 wt. %) co-doped TiO₂ than for PAMAM G0 and PAMAM G1. Consequently, co-doping enhanced the photocatalytic activity of the materials for the degradation of brilliant black (BB).
Keywords: codoped TiO₂, dendrimer, photodegradation, wastewater
556 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients
Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi
Abstract:
Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random-effect model for time-to-event data, in which the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation at Imam Khomeini Hospital in Iran and were followed up for at least seven years. To determine the factors affecting the survival of cirrhotic patients in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died; among them, 48 (58%) were men and 34 (42%) were women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second most frequent cause. Overall, mean survival over the seven-year follow-up was 28.44 months; it was 19.33 months for deceased patients and 31.79 months for censored patients. Using parametric survival models, Exponential and Weibull models with the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, the gamma frailty model with parametric distributions appears suitable.
Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution
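The abstract does not give the likelihood actually used, and the original analysis was carried out in R; the following is therefore only a minimal Python sketch of how a Weibull survival model with an individual-level gamma frailty (mean 1, variance theta) can be fitted by maximum likelihood after the frailty is integrated out analytically. The covariates, data, and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, event, X):
    """Weibull baseline hazard h0(t) = lam * rho * t**(rho-1) with a
    gamma-distributed individual frailty (mean 1, variance theta),
    integrated out analytically to give the marginal likelihood."""
    log_lam, log_rho, log_theta = params[:3]
    beta = params[3:]
    lam, rho, theta = np.exp([log_lam, log_rho, log_theta])
    eta = X @ beta                                  # linear predictor
    H0 = lam * t ** rho                             # cumulative baseline hazard
    h0 = lam * rho * t ** (rho - 1.0)               # baseline hazard
    denom = 1.0 + theta * H0 * np.exp(eta)
    log_S = -np.log(denom) / theta                  # marginal (population) survival
    log_h = np.log(h0) + eta - np.log(denom)        # marginal hazard
    return -np.sum(event * log_h + log_S)

# Hypothetical data: survival time (months), death indicator, two covariates
rng = np.random.default_rng(0)
n = 305
X = np.column_stack([rng.normal(40, 10, n),         # e.g. age
                     rng.normal(2.0, 0.8, n)])      # e.g. serum bilirubin
t = rng.weibull(1.2, n) * 30 + 0.1
event = rng.binomial(1, 0.26, n)                    # ~26% deaths, as in the study

x0 = np.zeros(3 + X.shape[1])
fit = minimize(neg_log_lik, x0, args=(t, event, X), method="Nelder-Mead",
               options={"maxiter": 20000})
print("lambda, rho, theta:", np.exp(fit.x[:3]), "beta:", fit.x[3:])
```

Setting rho = 1 recovers the Exponential special case mentioned in the abstract; the covariate effects reported in the study (age, bilirubin, albumin, encephalopathy) would enter through the design matrix X.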
555 Comparative Analysis of a Self-Supporting Wall of Granite Slabs in a Multi-Leaves Enclosure System
Authors: Miguel Angel Calvo Salve
Abstract:
Building enclosures and façades not only have an aesthetic component; they must also ensure thermal comfort and improve the acoustics and air quality in buildings. The design of façades, their assemblies, and their construction are key to developing a greener future in architecture. This research focuses on the design of a multi-leaf building envelope with a self-supporting wall of granite slabs, and demonstrates the advantages of its use compared with a hanging stone veneer in a vented cladding system. The design of the School of Music and Theatre of the Atlantic Area in Spain is used as a case study; its multi-leaf enclosure system consists of a self-supporting outer leaf of large granite slabs 15 cm thick, a vented cavity with thermal insulation, a brick wall, and a series of internal layers. The methodology combined simulations and data collected in the building. The advantages of the self-supporting wall of granite slabs in the outer leaf (15 cm), compared with a hanging stone veneer in a vented cladding system, can be summarized as follows. The stone is used in a more natural way, in compression. The weight of the stone slabs is carried directly by a strip footing and does not overload the reinforced concrete structure of the building. The weight of the stone slabs provides external airborne soundproofing, preventing sound transmission to the structure. The thickness of the stone slabs is sufficient to provide the external waterproofing of the building envelope. The self-supporting system, with a minimum of anchorages, allows a continuous external thermal insulation layer without thermal bridges. The thickness of the ashlar masonry provides thermal inertia that balances day and night temperatures in the external thermal insulation layer. The absence of open joints gives the quality of a continuous envelope, transmitting the sensations of the stone, the heaviness of the façade, the rhythm of the music, and the sequence of the theatre. The higher cost of the stone, due to its greater thickness, is more than compensated by the reduction in assembly costs, since no substructure systems for hanging stone veneers are needed.
Keywords: self-supporting wall, stone cladding systems, hanging veneer cladding systems, sustainability of facade systems
554 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP), Malaysia, reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in RTP. It is important to ensure that the core power remains stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered not well-posed. However, there is still potential to improve current performance by developing the next generation of nuclear core power control with a novel design. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) are presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering the transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC is compared by numerical simulations. In conclusion, the proposed OMPC shows significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed stability of the closed loop.
Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
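The reactor model and tuning are not given in the abstract, so the snippet below is only an illustrative sketch of the terminal (mode-2) ingredient of dual-mode OMPC: an infinite-horizon LQR gain obtained from the discrete algebraic Riccati equation for an assumed two-state plant. The matrices A, B, Q, and R are placeholders, not the RTP core model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder discrete-time state-space model x[k+1] = A x[k] + B u[k]
A = np.array([[1.00, 0.05],
              [0.00, 0.95]])
B = np.array([[0.00],
              [0.10]])
Q = np.diag([10.0, 1.0])      # state weights (e.g. power error, rod dynamics)
R = np.array([[0.5]])         # control effort weight

# Infinite-horizon cost-to-go P and LQR gain K for the terminal mode
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Inside the terminal set, dual-mode MPC applies u = -K x; the same P also
# supplies the terminal cost x' P x that makes the finite-horizon optimisation
# equivalent to an infinite-horizon one.
x = np.array([[0.2], [0.0]])
for k in range(5):
    u = -K @ x
    x = A @ x + B @ u
    print(k, u.item(), x.ravel())
```

The transient mode would then optimise the first few control moves subject to constraints, with this LQR law assumed beyond the prediction horizon.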
553 National Assessment for Schools in Saudi Arabia: Score Reliability and Plausible Values
Authors: Dimiter M. Dimitrov, Abdullah Sadaawi
Abstract:
The National Assessment for Schools (NAFS) in Saudi Arabia consists of standardized tests in Mathematics, Reading, and Science for school grade levels 3, 6, and 9. One main goal is to classify students into four categories of NAFS performance (minimal, basic, proficient, and advanced) by schools and the entire national sample. The NAFS scoring and equating is performed on a bounded scale (D-scale: ranging from 0 to 1) in the framework of the recently developed “D-scoring method of measurement.” The specificity of the NAFS measurement framework and data complexity presented both challenges and opportunities to (a) the estimation of score reliability for schools, (b) setting cut-scores for the classification of students into categories of performance, and (c) generating plausible values for distributions of student performance on the D-scale. The estimation of score reliability at the school level was performed in the framework of generalizability theory (GT), with students “nested” within schools and test items “nested” within test forms. The GT design was executed via a multilevel modeling syntax code in R. Cut-scores (on the D-scale) for the classification of students into performance categories was derived via a recently developed method of standard setting, referred to as “Response Vector for Mastery” (RVM) method. For each school, the classification of students into categories of NAFS performance was based on distributions of plausible values for the students’ scores on NAFS tests by grade level (3, 6, and 9) and subject (Mathematics, Reading, and Science). Plausible values (on the D-scale) for each individual student were generated via random selection from a statistical logit-normal distribution with parameters derived from the student’s D-score and its conditional standard error, SE(D). All procedures related to D-scoring, equating, generating plausible values, and classification of students into performance levels were executed via a computer program in R developed for the purpose of NAFS data analysis.Keywords: large-scale assessment, reliability, generalizability theory, plausible values
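The exact parametrization of the logit-normal distribution is not specified in the abstract, and the original work was done in R, so the function below is only a Python sketch of one plausible implementation: each student's D-score is mapped to the logit scale, noise with a standard deviation derived from the conditional standard error SE(D) is added, and the draws are mapped back to the bounded (0, 1) D-scale. The delta-method conversion of SE(D) to the logit scale and the cut-scores are assumptions.

```python
import numpy as np
from scipy.special import logit, expit

def plausible_values(d_score, se_d, n_pv=5, rng=None):
    """Draw plausible values on the bounded D-scale (0, 1).

    d_score : student's point estimate on the D-scale
    se_d    : conditional standard error SE(D) on the D-scale
    n_pv    : number of plausible values per student
    """
    rng = np.random.default_rng(rng)
    mu = logit(d_score)
    # Assumed delta-method approximation of the standard error on the logit scale
    sigma = se_d / (d_score * (1.0 - d_score))
    z = rng.normal(mu, sigma, size=n_pv)
    return expit(z)                          # back-transform to the D-scale

# Hypothetical student: D = 0.62, SE(D) = 0.04
pvs = plausible_values(0.62, 0.04, n_pv=5, rng=1)
print(np.round(pvs, 3))

# Classification into the four NAFS performance categories would then compare
# each plausible value against cut-scores on the D-scale (hypothetical values):
cuts = [0.35, 0.55, 0.75]                    # minimal / basic / proficient / advanced
print(np.digitize(pvs, cuts))
```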
552 Time to Second Line Treatment Initiation Among Drug-Resistant Tuberculosis Patients in Nepal
Authors: Shraddha Acharya, Sharad Kumar Sharma, Ratna Bhattarai, Bhagwan Maharjan, Deepak Dahal, Serpahine Kaminsa
Abstract:
Background: Drug-resistant (DR) tuberculosis (TB) continues to be a threat in Nepal, with an estimated 2800 new cases every year. The treatment of DR-TB with second line TB drugs is complex and takes longer time with comparatively lower treatment success rate than drug-susceptible TB. Delay in treatment initiation for DR-TB patients might further result in unfavorable treatment outcomes and increased transmission. This study thus aims to determine median time taken to initiate second-line treatment among Rifampicin Resistant (RR) diagnosed TB patients and to assess the proportion of treatment delays among various type of DR-TB cases. Method: A retrospective cohort study was done using national routine electronic data (DRTB and TB Laboratory Patient Tracking System-DHIS2) on drug resistant tuberculosis patients between January 2020 and December 2022. The time taken for treatment initiation was computed as– days from first diagnosis as RR TB through Xpert MTB/Rif test to enrollment on second-line treatment. The treatment delay (>7 days after diagnosis) was calculated. Results: Among total RR TB cases (N=954) diagnosed via Xpert nationwide, 61.4% were enrolled under shorter-treatment regimen (STR), 33.0% under longer treatment regimen (LTR), 5.1% for Pre-extensively drug resistant TB (Pre-XDR) and 0.4% for Extensively drug resistant TB (XDR) treatment. Among these cases, it was found that the median time from diagnosis to treatment initiation was 6 days (IQR:2-15.8). The median time was 5 days (IQR:2.0-13.3) among STR, 6 days (IQR:3.0-15.0) among LTR, 30 days (IQR:5.5-66.8) among Pre-XDR and 4 days (IQR:2.5-9.0) among XDR TB cases. The overall treatment delay (>7 days after diagnosis) was observed in 42.4% of the patients, among which, cases enrolled under Pre-XDR contributed substantially to treatment delay (72.0%), followed by LTR (43.6%), STR (39.1%) and XDR (33.3%). Conclusion: Timely diagnosis and prompt treatment initiation remain fundamental focus of the National TB program. The findings of the study, however suggest gaps in timeliness of treatment initiation for the drug-resistant TB patients, which could bring adverse treatment outcomes. Moreover, there is an alarming delay in second line treatment initiation for the Pre-XDR TB patients. Therefore, this study generates evidence to identify existing gaps in treatment initiation and highlights need for formulating specific policies and intervention in creating effective linkage between the RR TB diagnosis and enrollment on second line TB treatment with intensified efforts from health providers for follow-ups and expansion of more decentralized, adequate, and accessible diagnostic and treatment services for DR-TB, especially Pre-XDR TB cases, due to the observed long treatment delays.Keywords: drug-resistant, tuberculosis, treatment initiation, Nepal, treatment delay
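As a minimal sketch of the reported indicators, the pandas snippet below computes the days from Xpert MTB/RIF diagnosis to second-line treatment enrolment, the median and interquartile range, and the proportion of patients starting more than 7 days after diagnosis. The column names and example records are hypothetical and do not reflect the DHIS2 export format.

```python
import pandas as pd

# Hypothetical extract of the DR-TB register (dates as ISO strings)
df = pd.DataFrame({
    "regimen":        ["STR", "STR", "LTR", "Pre-XDR", "XDR"],
    "diagnosis_date": ["2021-03-01", "2021-05-10", "2021-06-02", "2021-07-15", "2021-08-01"],
    "treatment_date": ["2021-03-04", "2021-05-30", "2021-06-10", "2021-09-20", "2021-08-05"],
})
df["diagnosis_date"] = pd.to_datetime(df["diagnosis_date"])
df["treatment_date"] = pd.to_datetime(df["treatment_date"])
df["days_to_treatment"] = (df["treatment_date"] - df["diagnosis_date"]).dt.days

overall_median = df["days_to_treatment"].median()
q1, q3 = df["days_to_treatment"].quantile([0.25, 0.75])
delay_share = (df["days_to_treatment"] > 7).mean() * 100    # % starting >7 days after diagnosis

print(f"median {overall_median} days (IQR {q1}-{q3}); delayed: {delay_share:.1f}%")
print(df.groupby("regimen")["days_to_treatment"].median())  # per-regimen medians
```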
551 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process
Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach follows the study by simulation of a process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity. The proposed adaptive ensemble strategy outperforms the individual optimizers and the non-adaptive ensemble strategy in convergence speed, and the obtained results provide lower error values.Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization
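The specific bio-inspired algorithms and their update rules are not detailed in the abstract, so the sketch below only illustrates the adaptive-allocation idea: several black-box optimizers share a common best solution, and in each ensemble iteration the evaluation budget is redistributed in proportion to each optimizer's recent improvement, so that more efficient algorithms receive more processing capacity while a small floor preserves diversity. The toy mutation operators and the sphere objective are placeholders for the actual swarm/evolutionary methods and the MLP training error.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                        # placeholder for the MLP training error
    return float(np.sum(x ** 2))

def mutation_step(best, scale):
    """Stand-in for one bio-inspired optimizer: perturb the shared best point."""
    return best + rng.normal(0.0, scale, size=best.shape)

optimizers = {"opt_A": 0.5, "opt_B": 0.1, "opt_C": 1.0}   # differ only in step size here
best_x = rng.uniform(-5, 5, size=10)
best_f = objective(best_x)
budget_per_iter, n_iters = 60, 30
shares = {name: 1.0 / len(optimizers) for name in optimizers}

for it in range(n_iters):
    improvements = {}
    for name, scale in optimizers.items():
        n_evals = max(1, int(round(shares[name] * budget_per_iter)))
        gained = 0.0
        for _ in range(n_evals):
            cand = mutation_step(best_x, scale)
            f = objective(cand)
            if f < best_f:                       # credit the improvement to this optimizer
                gained += best_f - f
                best_x, best_f = cand, f
        improvements[name] = gained
    total = sum(improvements.values())
    if total > 0:                                # reallocate budget toward efficient optimizers,
        for name in optimizers:                  # keeping a small floor for population diversity
            shares[name] = 0.1 / len(optimizers) + 0.9 * improvements[name] / total
        norm = sum(shares.values())
        shares = {k: v / norm for k, v in shares.items()}

print("best f:", best_f, "shares:", {k: round(v, 2) for k, v in shares.items()})
```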
550 Propolis as Antioxidant Formulated in Nanoemulsion
Authors: Rachmat Mauludin, Irda Fidrianny, Dita Sasri Primaviri, Okti Alifiana
Abstract:
Natural products such as propolis, green tea and corncob are containing several compounds called antioxidant. Antioxidant can be used in topical application to protect skin against free radical, prevent skin cancer and skin aging. Previous study showed that the extract of propolis that has the highest antioxidant activity was ethanolic extract of propolis (EEP). It is important to make a dosage form that could keep the stability and could protect the effectiveness of antioxidant activity of the extracts. In this research, nanoemulsion (NE) was chosen to formulate those natural products. NE is a dispersion system between oil phase and water phase that formed by mechanical force with a lot amount of surfactants and has globule size below 100 nm. In pharmaceutical industries, NE was preferable for its stability, biodegradability, biocompatibility, its ease to be absorbed and eliminated, and for its use as carrier for lipophilic drugs. First, all of the natural products were extracted using reflux methods. Green tea and corncob were extracted using 96% ethanol while propolis using 70% ethanol. Then, the extracts were concentrated using rotavapor to obtain viscous extracts. The yield of EEP was 11.12%; green tea extract (GTE) was 23.37%; and corncob extract (CCE) was 17.23%. EEP contained steroid/triterpenoid, flavonoid and saponin. GTE contained flavonoid, tannin, and quinone while CCE contained flavonoid, phenol and tannin. The antioxidant activities of the extracts were then measured using DPPH scavenging capacity methods. The values of DPPH scavenging capacity were 61.14% for EEP; 97.16% for GTE; and 78.28% for CCE. The value of IC50 for EEP was 0.41629 ppm. After the extracts were evaluated, NE was prepared. Several surfactants and co-surfactants were used in many combinations and ratios in order to form a NE. Tween 80 and Kolliphor RH40 were used as surfactants while glycerin and propylene glycol were used as co-surfactants. The best NE consists of 26.25% of Kolliphor RH40; 8.75% of glycerin; 5% of rice bran oil; 3% of extracts; and 57% of water. EEP NE had globule size around 23.72 nm; polydispersity index below 0.5; and did not cause any irritation on rabbits. EEP NE was proven to be stable after passing stability test within 63 days at room temperature and 6 cycles of Freeze and Thaw test without separated. Based on TEM (Transmission Electron Microscopy) test, EEP NE had spherical structure with most of its size below 50 nm. The antioxidant activity of EEP NE was monitored for 6 weeks and showed no significant difference. The value of DPPH scavenging capacity for EEP NE was around 58%; for GTE NE was 96.75%; and for CCE NE was 55.69%.Keywords: propolis, green tea, corncob, antioxidant, nanoemulsion
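The abstract reports DPPH scavenging capacities but not the working equation; the usual relation is % scavenging = (A_control − A_sample) / A_control × 100, based on absorbance typically read around 517 nm. The short snippet below is a sketch of that calculation; the absorbance readings are hypothetical and merely chosen so that the outputs fall near the values reported for EEP, GTE, and CCE.

```python
def dpph_scavenging(a_control: float, a_sample: float) -> float:
    """% DPPH radical scavenging = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings of the DPPH solution with and without extract
a_control = 0.820
for name, a_sample in [("EEP", 0.319), ("GTE", 0.023), ("CCE", 0.178)]:
    print(f"{name}: {dpph_scavenging(a_control, a_sample):.2f}% scavenging")
```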
549 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria
Authors: O. O. Aiyelokun, O. A. Agbede
Abstract:
Numerical modelling of groundwater flow can be susceptible to calibration errors due to lack of adequate ground-based hydro-metrological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence to adopt models as decision support tools for sustainable management of groundwater; they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data, and lack of adequate ground-based hydro-meteorological stations; the need for adopting satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as substitute for ground-based, for computing boundary conditions; by determining if ground and satellite based meteorological data fit well in Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was firstly obtained in daily form and converted to monthly form for the period of 432 months (January 1979 to June, 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with CFSR data using Goodness of Fit (GOF) statistics. The study revealed that based on mean absolute error (MEA), coefficient of correlation, (r) and coefficient of determination (R²); all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had high range of index of agreement (d) and ratio of standard deviation (rSD), implying that CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where majority of the river basins are partially gaged and characterized with long missing hydro-metrological data.Keywords: boundary condition, goodness of fit, groundwater, satellite-based data
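As a sketch of the goodness-of-fit comparison between CFSR and ground-based series, the functions below compute the statistics named in the abstract: mean absolute error (MAE), Pearson correlation r, coefficient of determination R², Willmott's index of agreement d, and the ratio of standard deviations rSD. The exact formulations used in the study are not given, so standard definitions are assumed, and the example arrays are hypothetical.

```python
import numpy as np

def gof_stats(obs, sim):
    """Standard goodness-of-fit statistics between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    r2 = r ** 2
    # Willmott's index of agreement
    d = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    rsd = sim.std(ddof=1) / obs.std(ddof=1)      # ratio of standard deviations
    return {"MAE": mae, "r": r, "R2": r2, "d": d, "rSD": rsd}

# Hypothetical monthly rainfall (mm): ground station vs CFSR grid cell
ground = [120, 95, 60, 30, 12, 5, 3, 8, 25, 70, 110, 140]
cfsr   = [115, 100, 55, 35, 15, 6, 2, 10, 20, 75, 100, 150]
for k, v in gof_stats(ground, cfsr).items():
    print(f"{k}: {v:.3f}")
```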
548 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased competition in market, shorter product development time and increased product diversity. Short life cycles are normal in retail industry, style business, entertainment media, and telecom and semiconductor industry. The subject of accurate forecasting for demand of short lifecycle products is of special enthusiasm for many researchers and organizations. Due to short life cycle of products the amount of historical data that is available for forecasting is very minimal or even absent when new or modified products are launched in market. The companies dealing with such products want to increase the accuracy in demand forecasting so that they can utilize the full potential of the market at the same time do not oversupply. This provides the challenge to develop a forecasting model that can forecast accurately while handling large variations in data and consider the complex relationships between various parameters of data. Many statistical models have been proposed in literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to lack of historical data. Also artificial neural networks (ANN) models are very time consuming to perform forecasting. We have studied the existing models that are used for forecasting and their limitations. This work proposes an effective and powerful forecasting approach for short life cycle time series forecasting. We have proposed an approach which takes into consideration different scenarios related to data availability for short lifecycle products. We then suggest a methodology which combines statistical analysis with structured judgement. Also the defined approach can be applied across domains. We then describe the method of creating a profile from analogous products. This profile can then be used for forecasting products with historical data of analogous products. We have designed an application which combines data, analytics and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE and RMSE error scores. Conclusion: Based on the results it is observed that no one approach is sufficient for short life-cycle forecasting and we need to combine two or more approaches for achieving the desired accuracy.Keywords: forecast, short life cycle product, structured judgement, time series
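The abstract describes building a life-cycle profile from analogous products and scaling it for a new product with little or no history; the snippet below is only a schematic sketch of that idea, with hypothetical sales series. It normalizes each analogous product's weekly sales by its lifetime total, averages the normalized curves into a profile, multiplies the profile by a judgmental estimate of the new product's total demand, and then scores the forecast with MAPE and RMSE once actuals arrive.

```python
import numpy as np

# Hypothetical weekly sales of three analogous short-life-cycle products
analogous = np.array([
    [12, 30, 55, 70, 60, 40, 22, 10, 5, 2],
    [ 8, 25, 48, 66, 58, 35, 20,  9, 4, 1],
    [15, 35, 60, 75, 62, 45, 25, 12, 6, 3],
], dtype=float)

# Normalize each product to its lifetime total, then average -> life-cycle profile
profile = (analogous / analogous.sum(axis=1, keepdims=True)).mean(axis=0)

# Structured judgement supplies the expected lifetime volume of the new product
expected_total = 2500.0
forecast = profile * expected_total

# Once actuals are observed, score the forecast with MAPE / RMSE
actual = np.array([90, 240, 470, 610, 520, 330, 190, 85, 40, 15], dtype=float)
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
rmse = np.sqrt(np.mean((actual - forecast) ** 2))
print(np.round(forecast, 1))
print(f"MAPE = {mape:.1f}%, RMSE = {rmse:.1f}")
```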
547 Effect of Perceived Importance of a Task in the Prospective Memory Task
Authors: Kazushige Wada, Mayuko Ueda
Abstract:
In the present study, we reanalyzed lapse errors in the last phase of a job, by re-counting near lapse errors and increasing the number of participants. We also examined the results of this study from the perspective of prospective memory (PM), which concerns future actions. This study was designed to investigate whether perceiving the importance of PM tasks caused lapse errors in the last phase of a job and to determine if such errors could be explained from the perspective of PM processing. Participants (N = 34) conducted a computerized clicking task, in which they clicked on 10 figures that they had learned in advance in 8 blocks of 10 trials. Participants were requested to click the check box in the start display of a block and to click the checking off box in the finishing display. This task was a PM task. As a measure of PM performance, we counted the number of omission errors caused by forgetting to check off in the finishing display, which was defined as a lapse error. The perceived importance was manipulated by different instructions. Half the participants in the highly important task condition were instructed that checking off was very important, because equipment would be overloaded if it were not done. The other half in the not important task condition was instructed only about the location and procedure for checking off. Furthermore, we controlled workload and the emotion of surprise to confirm the effect of demand capacity and attention. To manipulate emotions during the clicking task, we suddenly presented a photo of a traffic accident and the sound of a skidding car followed by an explosion. Workload was manipulated by requesting participants to press the 0 key in response to a beep. Results indicated too few forgetting induced lapse errors to be analyzed. However, there was a weak main effect of the perceived importance of the check task, in which the mouse moved to the “END” button before moving to the check box in the finishing display. Especially, the highly important task group showed more such near lapse errors, than the not important task group. Neither surprise, nor workload affected the occurrence of near lapse errors. These results imply that high perceived importance of PM tasks impair task performance. On the basis of the multiprocess framework of PM theory, we have suggested that PM task performance in this experiment relied not on monitoring PM tasks, but on spontaneous retrieving.Keywords: prospective memory, perceived importance, lapse errors, multi process framework of prospective memory.
546 Cellulose Nanocrystals from Melon Plant Residues: A Sustainable and Renewable Source
Authors: Asiya Rezzouq, Mehdi El Bouchti, Omar Cherkaoui, Sanaa Majid, Souad Zyade
Abstract:
In recent years, there has been a steady increase in the exploration of new renewable and non-conventional sources for the production of biodegradable nanomaterials. Nature harbours valuable cellulose-rich materials that have so far been under-exploited and can be used to create cellulose derivatives such as cellulose microfibres (CMFs) and cellulose nanocrystals (CNCs). These unconventional sources have considerable potential as alternatives to conventional sources such as wood and cotton. By using agricultural waste to produce these cellulose derivatives, we are responding to the global call for sustainable solutions to environmental and economic challenges. Responsible management of agricultural waste is increasingly crucial to reducing the environmental consequences of its disposal, including soil and water pollution, while making efficient use of these untapped resources. In this study, the main objective was to extract cellulose nanocrystals (CNC) from melon plant residues using methods that are both efficient and sustainable. To achieve this high-quality extraction, we followed a well-defined protocol involving several key steps: pre-treatment of the residues by grinding, filtration and chemical purification to obtain high-quality (CMF) with a yield of 52% relative to the initial mass of the melon plant residue. Acid hydrolysis was then carried out using phosphoric acid and sulphuric acid to convert (CMF) into cellulose nanocrystals. The extracted cellulose nanocrystals were subjected to in-depth characterization using advanced techniques such as transmission electron microscopy (TEM), thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction. The resulting cellulose nanocrystals have exceptional properties, including a large specific surface area, high thermal stability and high mechanical strength, making them suitable for a variety of applications, including as reinforcements for composite materials. In summary, the study highlights the potential for recovering agricultural melon waste to produce high-quality cellulose nanocrystals with promising applications in industry, nanotechnology, and biotechnology, thereby contributing to environmental and economic sustainability.Keywords: cellulose, melon plant residues, cellulose nanocrystals, properties, applications, composite materials
545 Development of the Squamate Egg Tooth on the Basis of Grass Snake Natrix natrix Studies
Authors: Mateusz Hermyt, Pawel Kaczmarek, Weronika Rupik
Abstract:
The egg tooth is a crucial structure during hatching of lizards and snakes. In contrast to birds, turtles, crocodiles, and monotremes, egg tooth of squamate reptiles is a true tooth sharing common features of structure and development with all the other teeth of vertebrates. The egg tooth; however, due to its function, exhibits structural differences in relation to regular teeth. External morphology seems to be important in the context of phylogenetic relationships within Squamata but up to date, there is scarce information concerning structure and development of the egg tooth at the submicroscopical level. In presented studies detailed analysis of the egg tooth development in grass snake has been performed with the usage of light (including fluorescent), transmission and scanning electron microscopy. Grass snake embryo’s heads have been used in our studies. Grass snake is common snake species occurring in most of Europe including Poland. The grass snake is characterized by the presence of single unpaired egg tooth (as in most squamates) in contrast to geckos and dibamids possessing paired egg teeth. Studies show changes occurring on the external morphology, tissue and cellular levels of differentiating egg tooth. The egg tooth during its development changes its curvature. Initially, faces directly downward and in the course of its differentiation, it gradually changes to rostro-ventral orientation. Additionally, it forms conical dentinal protrusions on the sides. Histological analysis showed that egg tooth development occurs in similar steps in relation to regular teeth. It undergoes initiation, bud, cap and bell morphological stages. Analyses focused on describing morphological changes in hard tissues (mainly dentin and predentin) of egg tooth and in cells which enamel organ consists of. It included: outer enamel epithelium, stratum intermedium, inner enamel epithelium, odontoblasts, and cells of dental pulp. All specimens used in the study were captured according to the Polish regulations concerning the protection of wild species. Permission was granted by the Local Ethics Commission in Katowice (41/2010; 87/2015) and the Regional Directorate for Environmental Protection in Katowice (WPN.6401.257.2015.DC).Keywords: hatching, organogenesis, reptile, Squamata
544 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)
Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim
Abstract:
Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked as one of the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil, and the government recognizes the pineapple crop in the National Agriculture Policy as one of the priority commodities to be developed for the domestic and international markets. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of diseases and pests. Red tip disease on pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of this disease has not been confirmed, and its epidemiology is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay of sap extracted from red tip-affected pineapple was performed on Nicotiana tabacum cv. Coker by rubbing the extracted sap onto the leaves. Localised lesions were observed three weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced one strong and one faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves collected in the field showed bands with molecular masses of approximately 29 kDa, 39 kDa, and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus. Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI, and linear regression models were calibrated and tested in order to estimate red tip disease severity based on NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
Keywords: pineapple, diagnosis, virus, NDVI
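The GreenSeeker reports NDVI directly, and the abstract notes a strong linear relationship (r = 0.84) between red tip severity and NDVI; the snippet below is only a sketch of that analysis, using the standard NDVI definition (NIR − Red)/(NIR + Red) and an ordinary least-squares fit. The plot-level reflectances and severity scores are hypothetical and constructed to mimic the reported positive relationship.

```python
import numpy as np
from scipy import stats

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical plot-level reflectances and visually scored red tip severity (%)
nir      = np.array([0.30, 0.33, 0.35, 0.38, 0.41, 0.45, 0.48, 0.52])
red      = np.array([0.23, 0.21, 0.20, 0.18, 0.16, 0.14, 0.12, 0.10])
severity = np.array([5, 12, 20, 32, 45, 55, 63, 78])

x = ndvi(nir, red)
res = stats.linregress(x, severity)
print(f"severity = {res.slope:.1f} * NDVI + {res.intercept:.1f}  (r = {res.rvalue:.2f})")
```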
543 Investigation of Wind Farm Interaction with Ethiopian Electric Power’s Grid: A Case Study at Ashegoda Wind Farm
Authors: Fikremariam Beyene, Getachew Bekele
Abstract:
Ethiopia is currently on the move with various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. As is well known, wind power is globally embraced as one of the most important sources of energy, mainly for its environmentally friendly characteristics and because, once installed, it uses a resource available free of charge. However, integration of a wind power plant with an existing network poses many challenges that need serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of further wind farms is planned for the near future. The Ashegoda Wind Farm (13.2°, 39.6°), which is the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the planned 120 MW has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short circuit fault, as well as the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, detailed dynamic modelling of a 1 MW variable-speed wind turbine with a squirrel cage induction generator and full-scale power electronic converters is carried out and analyzed using the simulation software DIgSILENT PowerFactory. On the side of the Ethiopian electric power corporation, after sufficient data were collected for the analysis, the grid network was modeled. In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit at the grid terminal near the wind farm. The results show that the Ashegoda wind farm can ride through deep voltage dips within a short time, and the active and reactive power performance of the wind farm is also promising.
Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit
542 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine
Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot
Abstract:
Internal barrel wear is a major factor affecting the performance of small caliber guns in their different life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads with the aim on improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for a small-caliber gun using a Coordinate Measuring Machine (CMM). Initially, two different NATO small caliber guns: 5.56x45mm and 7.62x51mm, are considered. A Micura Zeiss Coordinate Measuring Machine (CMM) equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such us (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated. In order to ensure minimum measurement error, a statistical analysis is adopted to select the reliable CMM parameters combination. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber. Thus, a task-specific measurement uncertainty (TSMU) analysis is carried out for each measurement plan. Different approaches of TSMU evaluation have been proposed in the literature. This paper discusses two different techniques. The first is the substitution method described in ISO 15530 part 3. This approach is based on the use of calibrated workpieces with similar shape and size as the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4. Uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), is utilized in this technique to perform a point-by-point simulation of the measurements. To conclude, a comparison between both approaches is performed. Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns
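As a sketch of the Monte Carlo (ISO 15530-4 style) evaluation, the snippet below propagates assumed per-point probing errors through a simplified wear-volume estimate obtained from bore radii sampled at a few axial sections. The geometry, error magnitudes, and number of sections are hypothetical and stand in for the VCMM simulation of the actual chamber measurement.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bore radii (mm) at successive axial sections, before and after
# a firing cycle, with sections spaced dz apart
r_before = np.array([2.800, 2.802, 2.805, 2.810, 2.815])
r_after  = np.array([2.805, 2.810, 2.818, 2.828, 2.835])
dz = 5.0                                   # axial spacing between sections (mm)
sigma_point = 0.002                        # assumed std. dev. of a single radius estimate (mm)

def wear_volume(rb, ra, dz):
    """Eroded volume as the summed difference of circular cross-sections."""
    return np.sum(np.pi * (ra ** 2 - rb ** 2)) * dz

n_trials = 20000
volumes = np.empty(n_trials)
for i in range(n_trials):
    rb = r_before + rng.normal(0.0, sigma_point, r_before.shape)
    ra = r_after + rng.normal(0.0, sigma_point, r_after.shape)
    volumes[i] = wear_volume(rb, ra, dz)

v_mean = volumes.mean()
u = volumes.std(ddof=1)                    # standard uncertainty of the volume
print(f"wear volume = {v_mean:.3f} mm^3, expanded U(k=2) = {2 * u:.3f} mm^3")
```

In the substitution method of ISO 15530-3, by contrast, the same quantity would be bounded experimentally using the calibrated gauges mentioned in the abstract.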
541 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components
Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin
Abstract:
This study presents an on-site acoustic inspection methodology for quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, by successively performing acoustic beamforming and sound intensity measurements. A portable experimental setup is developed, consisting of an omnidirectional broadband acoustic source and a microphone array and sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows and junctions, the three-dimensional sound intensity field in the vicinity of junctions, and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of the simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform superimposing them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process of buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room, featuring a prefabricated panel. The latter is installed with controlled defects such as lack of insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakages from the building defects and assist in correcting them. The second validation case is performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurements and visualisation tools, the homogeneity of the partition installation is evaluated and leakages from junctions and doors are identified. Furthermore, the integration of acoustic indicators together with thermal and geometrical indicators via the augmented reality platform is shown.Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localization
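The acquisition hardware and processing chain are not described beyond "acoustic beamforming", so the function below is only a minimal time-domain delay-and-sum sketch of how a microphone-array power map over a wall surface can be formed; the array geometry, sampling rate, test signal, and integer-sample alignment are all simplifying assumptions, not the system used in the study.

```python
import numpy as np

def delay_and_sum_map(signals, fs, mic_xyz, grid_xyz, c=343.0):
    """Time-domain delay-and-sum beamforming power map.

    signals  : (n_mics, n_samples) microphone time histories
    mic_xyz  : (n_mics, 3) microphone positions [m]
    grid_xyz : (n_points, 3) candidate source positions on the surface [m]
    """
    n_mics, n_samples = signals.shape
    power = np.zeros(len(grid_xyz))
    for i, g in enumerate(grid_xyz):
        dist = np.linalg.norm(mic_xyz - g, axis=1)
        shifts = np.round((dist - dist.min()) / c * fs).astype(int)  # relative delays in samples
        summed = np.zeros(n_samples)
        for m in range(n_mics):
            summed += np.roll(signals[m], -shifts[m])    # advance later arrivals to align them
        power[i] = np.mean((summed / n_mics) ** 2)       # averaged squared aligned pressure
    return power

# Tiny synthetic check: one tonal source in front of a 3-microphone line array
fs, c = 51200, 343.0
t = np.arange(2048) / fs
src = np.array([0.2, 0.0, 0.5])
mics = np.array([[-0.1, 0.0, 0.0], [0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
sig = np.vstack([np.sin(2 * np.pi * 2000 * (t - np.linalg.norm(m - src) / c)) for m in mics])
grid = np.array([[x, 0.0, 0.5] for x in np.linspace(-0.3, 0.3, 13)])
print(grid[np.argmax(delay_and_sum_map(sig, fs, mics, grid))])   # expected to peak near x = 0.2
```

The grid point with maximum power indicates the dominant leakage location, which the augmented reality layer would then overlay on the component.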
540 Magnetoelastically Induced Perpendicular Magnetic Anisotropy and Perpendicular Exchange Bias of CoO/CoPt Multilayer Films
Authors: Guo Lei, Wang Yue, Nakamura Yoshio, Shi Ji
Abstract:
Recently, perpendicular exchange bias (PEB) is introduced as an active topic attracting continuous efforts. Since its discovery, extrinsic control of PEB has been proposed, due to its scientific significance in spintronic devices and potential application in high density magnetic random access memory with perpendicular magnetic tunneling junction (p-MTJ). To our knowledge, the researches aiming to controlling PEB so far are focused mainly on enhancing the interfacial exchange coupling by adjusting the FM/AFM interface roughness, or optimizing the crystalline structures of FM or AFM layer by employing different seed layers. In present work, the effects of magnetoelastically induced PMA on PEB have been explored in [CoO5nm/CoPt5nm]5 multilayer films. We find the PMA strength of FM layer also plays an important role on PEB at the FM/AFM interface and it is effective to control PEB of [CoO5nm/CoPt5nm]5 multilayer films by changing the magnetoelastically induced PMA of CoPt layer. [CoO5nm/CoPt5nm]5 multilayer films were deposited by magnetron sputtering on fused quartz substrate at room temperature, then annealed at 100°C, 250°C, 300°C and 375°C for 3h, respectively. XRD results reveal that all the samples are well crystallized with preferred fcc CoPt (111) orientation. The continuous multilayer structure with sharp component transition at the CoO5nm/CoPt5nm interface are identified clearly by transmission electron microscopy (TEM), x-ray reflectivity (XRR) and atomic force microscope (AFM). CoPt layer in-plane tensile stress is calculated by sin2φ method, and we find it increases gradually upon annealing from 0.99 GPa (as-deposited) up to 3.02 GPa (300oC-annealed). As to the magnetic property, significant enhancement of PMA is achieved in [CoO5nm/CoPt5nm]5 multilayer films after annealing due to the increase of CoPt layer in-plane tensile stress. With the enhancement of magnetoelastically induced PMA, great improvement of PEB is also achieved in [CoO5nm/CoPt5nm]5 multilayer films, which increases from 130 Oe (as-deposited) up to 1060 Oe (300oC-annealed), showing the same change tendency as PMA and the strong correlation with CoPt layer in-plane tensile stress. We consider it is the increase of CoPt layer in-plane tensile stress that leads to the enhancement of PMA, and thus the enhancement of magnetoelastically induced PMA results in the improvement of PEB in [CoO5nm/CoPt5nm]5 multilayer films.Keywords: perpendicular exchange bias, magnetoelastically induced perpendicular magnetic anisotropy, CoO5nm/CoPt5nm]5 multilayer film with in-plane stress, perpendicular magnetic tunneling junction
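The abstract states that the in-plane tensile stress of the CoPt layer was calculated by the sin²ψ method but gives no intermediate data; the snippet below is only a schematic sketch of the standard evaluation, in which the interplanar spacing d measured at several tilt angles ψ is regressed against sin²ψ and the slope is converted to stress using assumed elastic constants. All input values, including E, ν, and the d spacings, are hypothetical.

```python
import numpy as np

# Hypothetical XRD data: lattice spacing d (angstrom) of the CoPt (111) reflection
# measured at several tilt angles psi
psi_deg = np.array([0, 15, 25, 35, 45])
d_psi   = np.array([2.1980, 2.2000, 2.2032, 2.2076, 2.2126])

x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, d_psi, 1)        # linear fit of d_psi vs sin^2(psi)

# Assumed isotropic elastic constants for the CoPt film
E  = 200e9        # Young's modulus, Pa
nu = 0.33         # Poisson's ratio
d0 = intercept    # strain-free spacing approximated by the extrapolation to psi = 0

sigma = (E / (1.0 + nu)) * slope / d0             # in-plane stress, Pa
print(f"slope = {slope:.4e} A per unit sin^2(psi), sigma = {sigma / 1e9:.2f} GPa")
```

A positive slope (d increasing with ψ) corresponds to in-plane tension, consistent with the tensile stresses reported for the annealed films.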
539 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications
Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie
Abstract:
In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. ANSYS FLUENT software has been employed with a mean average error of 5.97% relative to Forrest’s MIT cooling channel study (2014) at a Reynolds number of 50,443 with a Prandtl number of 3.01. This suggests that the simulation model created for turbulent flow was suitable to set as a foundation for the study of different aspect ratios in the channel. Multiple aspect ratios were also considered to understand the influence of high aspect ratios to analyse the best performing cooling channel, which was determined to be the highest aspect ratio channels. Hence, the approximate 28:1 aspect ratio provided the best characteristics to ensure effective cooling. A mesh convergence study was performed to assess the optimum mesh density to collect accurate results. Hence, for this study an element size of 0.05mm was used to generate 579,120 for proper turbulent flow simulation. Deploying a greater bias factor would increase the mesh density to the furthest edges of the channel which would prove to be useful if the focus of the study was just on a single side of the wall. Since a bulk temperature is involved with the calculations, it is essential to ensure a suitable bias factor is used to ensure the reliability of the results. Hence, in this study we have opted to use a bias factor of 5 to allow greater mesh density at both edges of the channel. However, the limitations on mesh density and hardware have curtailed the sophistication achievable for the turbulence characteristics. Also only linear rectangular channels were considered, i.e. curvature was ignored. Furthermore, we only considered conventional water coolant. From this CFD study the variation of aspect ratio provided a deeper appreciation of the effect of small to high aspect ratios with regard to cooling channels. Hence, when considering an application for the channel, the geometry of the aspect ratio must play a crucial role in optimizing cooling performance.Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer
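As a small sketch of the non-dimensional quantities quoted in the abstract, the snippet below computes the hydraulic diameter of a rectangular mini-channel, the Reynolds number, and the Prandtl number of the water coolant. The channel dimensions, mass flow rate, and property values are hypothetical and only illustrate the calculation, not Forrest's MIT geometry.

```python
import math

def hydraulic_diameter(width: float, height: float) -> float:
    """D_h = 4A / P for a rectangular duct."""
    area = width * height
    perimeter = 2.0 * (width + height)
    return 4.0 * area / perimeter

# Hypothetical high-aspect-ratio mini-channel (approx. 28:1) and water properties
w, h = 5.6e-3, 0.2e-3             # channel width and height, m
mdot = 0.010                      # coolant mass flow rate, kg/s
mu, k, cp = 5.0e-4, 0.62, 4180.0  # viscosity (Pa s), conductivity (W/m K), heat capacity (J/kg K)

Dh = hydraulic_diameter(w, h)
G = mdot / (w * h)                # mass flux, kg/(m^2 s)
Re = G * Dh / mu                  # Reynolds number
Pr = cp * mu / k                  # Prandtl number
print(f"D_h = {Dh * 1e3:.3f} mm, Re = {Re:.0f}, Pr = {Pr:.2f}")
```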
538 Active Power Filters and their Smart Grid Integration - Applications for Smart Cities
Authors: Pedro Esteban
Abstract:
Most installations nowadays are exposed to many power quality problems, and they also face numerous challenges to comply with grid code and energy efficiency requirements. The reason behind this is that they are not designed to support nonlinear, non-balanced, and variable loads and generators that make up a large percentage of modern electric power systems. These problems and challenges become especially critical when designing green buildings and smart cities. These problems and challenges are caused by equipment that can be typically found in these installations like variable speed drives (VSD), transformers, lighting, battery chargers, double-conversion UPS (uninterruptible power supply) systems, highly dynamic loads, single-phase loads, fossil fuel generators and renewable generation sources, to name a few. Moreover, events like capacitor switching (from existing capacitor banks or passive harmonic filters), auto-reclose operations of transmission and distribution lines, or the starting of large motors also contribute to these problems and challenges. Active power filters (APF) are one of the fastest-growing power electronics technologies for solving power quality problems and meeting grid code and energy efficiency requirements for a wide range of segments and applications. They are a high performance, flexible, compact, modular, and cost-effective type of power electronics solutions that provide an instantaneous and effective response in low or high voltage electric power systems. They enable longer equipment lifetime, higher process reliability, improved power system capacity and stability, and reduced energy losses, complying with most demanding power quality and energy efficiency standards and grid codes. There can be found several types of active power filters, including active harmonic filters (AHF), static var generators (SVG), active load balancers (ALB), hybrid var compensators (HVC), and low harmonic drives (LHD) nowadays. All these devices can be used in applications in Smart Cities bringing several technical and economic benefits.Keywords: power quality improvement, energy efficiency, grid code compliance, green buildings, smart cities
537 Collocation Errors in English as Second Language (ESL) Essay Writing
Authors: Fatima Muhammad Shitu
Abstract:
In language learning, Second language learners like their native speaker counter parts, commit errors in their attempt to achieve competence in the target language. The realm of Collocation has to do with meaning relation between lexical items. In all human language, there is a kind of ‘natural order’ in which words are arranged or relate to one another in sentences so much so that when a word occurs in a given context, the related or naturally co -occurring word will automatically come to the mind. It becomes an error, therefore, if students inappropriately pair or arrange such ‘naturally’ co – occurring lexical items in a text. It has been observed that most of the second language learners in this research group commit collocational errors. A study of this kind is very significant as it gives insight into the kinds of errors committed by learners. This will help the language teacher to be able to identify the sources and causes of such errors as well as correct them thereby guiding, helping and leading the learners towards achieving some level of competence in the language. The aim of the study is to understand the nature of these errors as stumbling blocks to effective essay writing. The objective of the study is to identify the errors, analyse their structural compositions so as to determine whether there are similarities between students in this regard and to find out whether there are patterns to these kinds of errors which will enable the researcher to understand their sources and causes. As a descriptive research, the researcher samples some nine hundred essays collected from three hundred undergraduate learners of English as a second language in the Federal College of Education, Kano, North- West Nigeria, i.e. three essays per each student. The essays which were given on three different lecture times were of similar thematic preoccupations (i.e. same topics) and length (i.e. same number of words). The essays were written during the lecture hour at three different lecture occasions. The errors were identified in a systematic manner whereby errors so identified were recorded only once even if they occur severally in students’ essays. The data was collated using percentages in which the identified number of occurrences were converted accordingly in percentages. The findings from the study indicates that there are similarities as well as regular and repeated errors which provided a pattern. Based on the pattern identified, the conclusion is that students’ collocational errors are attributable to poor teaching and learning which resulted in wrong generalisation of rules.Keywords: collocations, errors, second language learning, ESL students
536 Dynamic Modeling of the Impact of Chlorine on Aquatic Species in Urban Lake Ecosystem
Authors: Zhiqiang Yan, Chen Fan, Yafei Wang, Beicheng Xia
Abstract:
Urban lakes play an invaluable role in urban water systems such as flood control, water supply, and public recreation. However, over 38% of the urban lakes have suffered from severe eutrophication in China. Chlorine that could remarkably inhibit the growth of phytoplankton in eutrophic, has been widely used in the agricultural, aquaculture and industry in the recent past. However, little information has been reported regarding the effects of chlorine on the lake ecosystem, especially on the main aquatic species.To investigate the ecological response of main aquatic species and system stability to chlorine interference in shallow urban lakes, a mini system dynamic model was developed based on the competition and predation of main aquatic species and total phosphorus circulation. The main species of submerged macrophyte, phytoplankton, zooplankton, benthos, spiroggra and total phosphorus in water and sediment were used as variables in the model,while the interference of chlorine on phytoplankton was represented by an exponential attenuation equation. Furthermore, the eco-exergy expressing the development degree of ecosystem was used to quantify the complexity of the shallow urban lake. The model was validated using the data collected in the Lotus Lake in Guangzhoufrom1 October 2015 to 31 January 2016.The correlation coefficient (R), root mean square error-observations standard deviation ratio (RSR) and index of agreement (IOA) were calculated to evaluate accuracy and reliability of the model.The simulated values showed good qualitative agreement with the measured values of all components. The model results showed that chlorine had a notable inhibitory effect on Microcystis aeruginos,Rachionus plicatilis, Diaphanosoma brachyurum Liévin and Mesocyclops leuckarti (Claus).The outbreak of Spiroggra.spp. inhibited the growth of Vallisneria natans (Lour.) Hara, leading to a gradual decrease of eco-exergy and the breakdown of ecosystem internal equilibria. This study gives important insight into using chlorine to achieve eutrophication control and understand mechanism process.Keywords: system dynamic model, urban lake, chlorine, eco-exergy
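The full set of model equations is not reproduced in the abstract, so the fragment below is only a toy sketch of the structure it describes: a forward-Euler update of a phytoplankton state variable whose growth is multiplied by an exponential chlorine attenuation term. The decay constant, growth and loss rates, and initial values are all hypothetical.

```python
import numpy as np

# Hypothetical parameters of a toy phytoplankton sub-model
r_max    = 0.6      # maximum specific growth rate, 1/day
loss     = 0.1      # grazing + mortality + settling, 1/day
k_cl     = 1.8      # assumed chlorine inhibition constant, L/mg
cl_decay = 0.15     # first-order chlorine decay after dosing, 1/day
dt, days = 0.1, 60

def chlorine_inhibition(c_cl):
    """Exponential attenuation of phytoplankton growth by chlorine."""
    return np.exp(-k_cl * c_cl)

phyto = 2.0                            # phytoplankton biomass (e.g. mg chl-a / m^3)
cl    = 0.5                            # chlorine concentration, mg/L

t = 0.0
while t < days:
    growth = r_max * chlorine_inhibition(cl) * phyto
    phyto += dt * (growth - loss * phyto)     # forward Euler step for phytoplankton
    cl    += dt * (-cl_decay * cl)            # chlorine decays over time
    t     += dt
print(f"phytoplankton after {days} d: {phyto:.2f}")
```

In the full model, the other state variables (submerged macrophytes, zooplankton, benthos, Spirogyra, and phosphorus pools) would be updated in the same loop, and eco-exergy would be computed from the resulting biomasses.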
535 Determination of Bromides, Chlorides and Fluorides in Case of Their Joint Presence in Ion-Conducting Electrolyte
Authors: V. Golubeva, O. Vakhnina, I. Konopkina, N. Gerasimova, N. Taturina, K. Zhogova
Abstract:
To improve chemical current sources, the ion-conducting electrolytes based on Li halides (LiCl-KCl, LiCl-LiBr-KBr, LiCl-LiBr-LiF) are developed. It is necessary to have chemical analytical methods for determination of halides to control the electrolytes technology. The methods of classical analytical chemistry are of interest, as they are characterized by high accuracy. Using these methods is a difficult task because halides have similar chemical properties. The objective of this work is to develop a titrimetric method for determining the content of bromides, chlorides, and fluorides in their joint presence in an ion-conducting electrolyte. In accordance with the developed method of analysis to determine fluorides, electrolyte sample is dissolved in diluted HCl acid; fluorides are titrated by La(NO₃)₃ solution with potentiometric indication of equivalence point, fluoride ion-selective electrode is used as sensor. Chlorides and bromides do not form a hardly soluble compound with La and do not interfere in result of analysis. To determine the bromides, the sample is dissolved in a diluted H₂SO₄ acid. The bromides are oxidized with a solution of KIO₃ to Br₂, which is removed from the reaction zone by boiling. Excess of KIO₃ is titrated by iodometric method. The content of bromides is calculated from the amount of KIO₃ spent on Br₂ oxidation. Chlorides and fluorides are not oxidized by KIO₃ and do not interfere in result of analysis. To determine the chlorides, the sample is dissolved in diluted HNO₃ acid and the total content of chlorides and bromides is determined by method of visual mercurometric titration with diphenylcarbazone indicator. Fluorides do not form a hardly soluble compound with mercury and do not interfere with determination. The content of chlorides is calculated taking into account the content of bromides in the sample of electrolyte. The validation of the developed analytical method was evaluated by analyzing internal reference material with known chlorides, bromides and fluorides content. The analytical method allows to determine chlorides, bromides and fluorides in case of their joint presence in ion-conducting electrolyte within the range and with relative total error (δ): for bromides from 60.0 to 65.0 %, δ = ± 2.1 %; for chlorides from 8.0 to 15.0 %, δ = ± 3.6 %; for fluorides from 5.0 to 8.0%, ± 1.5% . The analytical method allows to analyze electrolytes and mixtures that contain chlorides, bromides, fluorides of alkali metals and their mixtures (K, Na, Li).Keywords: bromides, chlorides, fluorides, ion-conducting electrolyte
Procedia PDF Downloads 127534 Mitigation of Risk Management Activities towards Accountability into Microfinance Environment: Malaysian Case Study
Authors: Nor Azlina A. Rahman, Jamaliah Said, Salwana Hassan
Abstract:
Rapid changes in the global business environment, such as intense competition, managerial and operational demands, changing governmental regulation, and innovation in technology, have significant impacts on organizations. At present, the global business environment demands more proactive microfinance institutions to provide opportunities for business success. Microfinance providers in Malaysia still carry out their funding activities by cash and cheque. These institutions are at high risk, as the paper-based system is slow and prone to human error, as well as requiring a major annual reconciliation process. The global transformation of financial services, the growing involvement of technology, innovation, and new business activities have progressively made the risk management profile more subjective and diversified. The persistent, complex, and dynamic nature of risk management activities in these institutions arises from highly automated advancements in technology. This may manifest in a variety of ways throughout the financial services sector. This study seeks to examine the operational risks management currently experienced by microfinance providers in Malaysia; investigate current practices of facilitator control factor mechanisms; and explore how the adoption of technology, innovation, and the use of management accounting practices would affect the risk management process of the operation system in microfinance providers in Malaysia. A case study method was employed. The case study also seeks to establish how the vital past role of management accounting can be used to mitigate risk management activities towards accountability, serving as information or a guideline for microfinance providers. An empirical element obtained through a qualitative method is needed in this study, where multifaceted and in-depth information is essential to understand the issues of these institutional phenomena. This study is expected to propose a theoretical model for the implementation of technology, innovation, and management accounting practices into the operation system to improve internal control and subsequently lead to the mitigation of risk management activities among microfinance providers so that they can be more successful. Keywords: microfinance, accountability, operational risks, management accounting practices
Procedia PDF Downloads 438533 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure
Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin
Abstract:
Potassium (K) is a known macro nutrient and essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step in analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimation of K bioavailability. In contrast, the traditional single leaching method has long been used to classify K speciation according to its availability to plants and is used to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the micro-structure of clay minerals under various environmental conditions (i.e., swelling or shrinking) is characterized using Transmission X-Ray Microscopy (TXM). The objectives of this study are to 1) compare the distribution of K speciation between the single leaching and sequential extraction processes and 2) determine the clay particle flocculation structure before/after suspension with K⁺ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), long-term applied K fertilizer (10 years; 168-240 kg K₂O ha⁻¹ year⁻¹), red soil (450-500 kg K₂O ha⁻¹ year⁻¹), and forest soil. The results showed that the amounts of K species determined by the single leaching method followed, from highest to lowest, the order mineral K, HNO₃ K, non-exchangeable K, NH₄OAc K, exchangeable K, and water-soluble K. The sequential extraction process indicated that most K species in soil were associated with the residual, organic matter, Fe or Mn oxide, and exchangeable fractions, while the K fraction associated with carbonate was not detected in the tropical soil samples. The soils under long-term K fertilizer application and the red soil had higher exchangeable K than the soil farmed long-term without K fertilizer and the forest soil. The results indicated that applying K fertilizer and organic fertilizer is one way to increase the available K (water-soluble K and exchangeable K). The two-dimensional TXM images of clay particles in suspension with K⁺ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger and much larger empty voids than in 0.025 M KCl and deionized water, respectively. TXM nanotomography is a new technique that can be useful in the field as a tool for better understanding clay mineral micro-structure. Keywords: potassium, sequential extraction process, clay mineral, TXM
Procedia PDF Downloads 289532 Seismic Response of Structure Using a Three Degree of Freedom Shake Table
Authors: Ketan N. Bajad, Manisha V. Waghmare
Abstract:
Earthquakes are the biggest threat to civil engineering structures, as every year they cost billions of dollars and thousands of deaths around the world. There are various experimental techniques, such as the pseudo-dynamic test (a nonlinear structural dynamics technique), the real-time pseudo-dynamic test, and the shaking table test, that can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it. It simulates a seismic event using existing seismic data, nearly truly reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The main types of shake table are the vertical shake table, horizontal shake table, servo-hydraulic shake table, and servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three degree of freedom (i.e., in the X, Y, and Z directions) shake table. A three (3) DOF shaking table is a useful experimental apparatus, as it imitates a real-time desired acceleration vibration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of the 3 DOF shake table by a trial and error method. The table is designed to have a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer is mounted on the model and used for recording the data. The experimental results obtained are further validated with the results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but the model cannot be used for direct numerical conclusions (such as stiffness, deflection, etc.), as many uncertainties are involved while scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate that the shake table is the most effective of all methods available for seismic assessment of a structure. Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed
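The scaling uncertainties noted above can be illustrated with a minimal sketch of one common similitude assumption for 1g shake table tests: accelerations are reproduced at full scale while geometry is reduced, so time compresses by the square root of the length scale. This particular scaling law and the input record below are assumptions for illustration and are not stated in the abstract.

```python
import numpy as np

def scale_record(time_s, accel_g, length_scale):
    """Scale an earthquake acceleration record for a reduced-scale model,
    assuming acceleration similitude (accelerations kept at full scale,
    geometry reduced by length_scale < 1). Under that assumption the time
    axis compresses by sqrt(length_scale)."""
    t_model = np.asarray(time_s) * np.sqrt(length_scale)
    a_model = np.asarray(accel_g)          # accelerations unchanged
    return t_model, a_model

# Hypothetical 1:10 model of a steel industrial shed, illustrative record
t = np.linspace(0.0, 20.0, 2001)           # 20 s record sampled at 100 Hz
a = 0.3 * np.sin(2 * np.pi * 2.0 * t)      # assumed 2 Hz, 0.3 g base motion
t_m, a_m = scale_record(t, a, length_scale=1 / 10)
print(f"model record duration: {t_m[-1]:.2f} s")   # about 6.32 s
```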
Procedia PDF Downloads 140531 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites
Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler
Abstract:
Unidirectional composites are very popular structural materials used in the aerospace, marine, energy, and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature. The presence of a stress concentration in the structure, such as a hole, makes the problem further complicated. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites that contain stress concentrations. Accurate finite element analysis and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without an enormous number of tests, which cost considerable time and money. In this study, unidirectional Carbon/Epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Composite specimens with different specimen width (W) to hole diameter (D) ratios were tested to investigate the effect of hole size on the stress concentration and strength. Also, specimens with the same specimen width to hole diameter ratio but varied sizes were tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, it was found that the stress concentration factor increased by approximately 15% when the W/D ratio decreased from 6 to 3. The point stress criterion (PSC), the inherent flaw method, and progressive failure analysis were compared in terms of predicting the strength of the specimens. All methods could predict the strength of the specimens with a maximum error of 8%. PSC was better than the other methods for high values of the W/D ratio, whereas the inherent flaw method was successful for low values of W/D. Also, it was seen that increasing the W/D ratio by 4 times raises the failure strength of the composite specimen by 62.4%. For constant W/D ratio specimens, all the strength prediction methods were more successful for smaller specimens than for larger ones. Increasing the specimen width and hole diameter together by 2 times reduces the specimen failure strength by 13.2%. Keywords: failure, strength, stress concentration, unidirectional composites
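A minimal sketch of the point stress criterion (PSC) mentioned above is given below, using the Whitney-Nuismer approximation for an infinite plate with a circular hole; the characteristic distance d0, the unnotched strength, and the hole radii are illustrative assumptions, not values from the study (finite-width effects are also ignored here).

```python
def psc_strength_ratio(radius_mm, d0_mm, kt_inf=3.0):
    """Whitney-Nuismer point stress criterion for an infinite plate with a
    circular hole: ratio of notched to unnotched laminate strength.
    d0_mm is the characteristic distance ahead of the hole edge (a material
    fitting parameter); kt_inf is the infinite-plate stress concentration
    factor (3.0 for the quasi-isotropic / isotropic case)."""
    xi = radius_mm / (radius_mm + d0_mm)
    denom = 2 + xi**2 + 3 * xi**4 - (kt_inf - 3.0) * (5 * xi**6 - 7 * xi**8)
    return 2.0 / denom

unnotched_strength = 800.0          # MPa, hypothetical
for r in (2.0, 4.0, 8.0):           # hole radii in mm, hypothetical
    ratio = psc_strength_ratio(radius_mm=r, d0_mm=1.0)
    print(f"R = {r:4.1f} mm -> predicted notched strength "
          f"= {ratio * unnotched_strength:6.1f} MPa")
```

The predicted strength drops as the hole radius grows for a fixed d0, which is the hole-size effect the abstract refers to.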
Procedia PDF Downloads 155530 A Long Range Wide Area Network-Based Smart Pest Monitoring System
Authors: Yun-Chung Yu, Yan-Wen Wang, Min-Sheng Liao, Joe-Air Jiang, Yuen-Chung Lee
Abstract:
This paper proposes the use of a Long Range Wide Area Network (LoRaWAN) for a smart pest monitoring system targeting the oriental fruit fly (Bactrocera dorsalis), with the aim of improving the communication efficiency of the system. The oriental fruit fly is one of the main pests in Southeast Asia and the Pacific Rim. Different smart pest monitoring systems based on the Internet of Things (IoT) architecture have been developed to overcome the problems of manual measurement. These systems often use Octopus II, a communication module following the 2.4 GHz IEEE 802.15.4 ZigBee specification, for the sensor nodes. The Octopus II is commonly used in low-power and short-distance communication. However, energy consumption increases as the logical topology becomes more complicated in order to provide enough coverage over a large area. By comparison, LoRaWAN follows the Low Power Wide Area Network (LPWAN) specification, which targets the key requirements of IoT technology, such as secure bi-directional communication, mobility, and localization services. The LoRaWAN network has the advantages of long range communication, high stability, and low energy consumption. The 433 MHz LoRaWAN model has two advantages over the 2.4 GHz ZigBee model: greater diffraction and less interference. In this paper, the Octopus II module is replaced by a LoRa module to increase the coverage of the monitoring system, improve the communication performance, and prolong the network lifetime. The performance of the LoRa-based system is compared with a ZigBee-based system using three indexes: the packet receiving rate, delay time, and energy consumption, and the experiments are done in different settings (e.g., distances and environmental conditions). In the distance experiment, a pest monitoring system using the two communication specifications is deployed in an area with various obstacles, such as buildings and living creatures, and the performance of the two specifications is examined. The experimental results show that the packet receiving rate of the LoRa-based system is 96%, which is much higher than that of the ZigBee system, when the distance between any two modules is about 500 m. These results indicate the capability of a LoRaWAN-based monitoring system for long range transmission and ensure the stability of the system. Keywords: LoRaWAN, oriental fruit fly, IoT, Octopus II
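A minimal sketch of how the three comparison indexes could be computed from per-packet field logs is given below; the log layout and the numbers are hypothetical and only illustrate the index definitions, not the authors' measurement procedure.

```python
from statistics import mean

# Hypothetical per-packet log: (sent, received, delay_s, energy_mJ)
log = [
    (True, True,  0.42, 18.0),
    (True, True,  0.45, 18.3),
    (True, False, None, 18.1),   # lost packet
    (True, True,  0.40, 17.9),
]

sent       = sum(1 for s, *_ in log if s)
received   = sum(1 for s, r, *_ in log if s and r)
prr        = 100.0 * received / sent                 # packet receiving rate (%)
avg_delay  = mean(d for _, r, d, _ in log if r)      # mean delay of delivered packets (s)
avg_energy = mean(e for *_, e in log)                # mean energy per transmission (mJ)

print(f"PRR = {prr:.1f} %, delay = {avg_delay:.2f} s, energy = {avg_energy:.1f} mJ")
```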
Procedia PDF Downloads 352529 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System
Authors: Eronini Iheanyi Umez-Eronini
Abstract:
Most studies and existing implementations of compressed air energy storage (CAES) coupled with a wind farm to overcome the intermittency and variability of wind power are based on bulk or centralized CAES plants. A dynamic model of a hybrid wind farm with wind turbines and distributed CAES, consisting of air storage tanks and compressor and expander trains at each wind turbine station, is developed and simulated in MATLAB. An ad hoc supervisory controller, in which the wind turbines are simply operated under classical power-optimizing region control while power production by the expanders and air storage by the compressors are scheduled, including modulation of the compressor power levels within a control range, is used to regulate overall farm power production to track a minute-scale (3-minute sampling period) TSO absolute power reference signal over an eight-hour period. Simulation results for real wind data input, with a simple wake field model applied to a hybrid plant composed of ten 5-MW wind turbines in a row and ten compatibly sized and configured Diabatic CAES stations, show that the plant controller is able to track the power demand signal within an error band on the order of the electrical power rating of a single expander. This performance suggests that much improved results should be anticipated when the global D-CAES control is combined with power regulation for the individual wind turbines using available approaches for wind farm active power control. For a standalone power plant fuel-to-electricity efficiency estimate of up to 60%, the round trip electrical storage efficiency computed for the distributed CAES, wherein heat generated by the running compressors is utilized in the preheat stage of the running high-pressure expanders while fuel is introduced and combusted before the low-pressure expanders, was comparable to reported round trip electrical storage efficiencies for bulk Adiabatic CAES. Keywords: hybrid wind farm, distributed CAES, diabatic CAES, active power control, dynamic modeling and simulation
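One common way to account for the fuel input when quoting a round trip electrical storage efficiency for diabatic CAES is to credit the fuel heat as the electricity a standalone plant of the assumed reference efficiency would have produced from it. The sketch below illustrates that accounting; the convention and the energy totals are assumptions for illustration and may differ from the authors' exact definition.

```python
def caes_round_trip_efficiency(e_expander_mwh, e_compressor_mwh,
                               fuel_heat_mwh, ref_fuel_to_elec=0.60):
    """One accounting convention for the round trip electrical storage
    efficiency of a diabatic CAES plant: fuel heat input is converted to
    'equivalent electricity' at the reference fuel-to-electricity
    efficiency and added to the compressor electricity input."""
    equivalent_elec_in = e_compressor_mwh + ref_fuel_to_elec * fuel_heat_mwh
    return e_expander_mwh / equivalent_elec_in

# Hypothetical energy totals over a dispatch period, for illustration only
eta = caes_round_trip_efficiency(e_expander_mwh=100.0,
                                 e_compressor_mwh=85.0,
                                 fuel_heat_mwh=135.0)
print(f"round trip electrical storage efficiency = {eta:.2f}")
```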
Procedia PDF Downloads 82528 Effect of Different Knee-Joint Positions on Passive Stiffness of Medial Gastrocnemius Muscle and Aponeuroses during Passive Ankle Motion
Authors: Xiyao Shan, Pavlos Evangelidis, Adam Kositsky, Naoki Ikeda, Yasuo Kawakami
Abstract:
The human triceps surae (two bi-articular gastrocnemii and one mono-articular soleus) have aponeuroses on the posterior and anterior aspects of each muscle, where the anterior aponeuroses of the gastrocnemii adjoin the posterior aponeurosis of the soleus, possibly contributing to intermuscular force transmission between the gastrocnemii and the soleus. Since the mechanical behavior of these aponeuroses at different knee- and ankle-joint positions remains unclear, the purpose of this study was to clarify this through observations of the localized changes in passive stiffness of the posterior aponeurosis, muscle belly, and adjoining aponeuroses of the medial gastrocnemius (MG) induced by different knee and ankle angles. Eleven healthy young males (25 ± 2 yr, 176.7 ± 4.7 cm, 71.1 ± 11.1 kg) participated in this study. Each subject took either a prone position on an isokinetic dynamometer with the knee joint fully extended (K180) or a kneeling position with the knee joint flexed at 90° (K90), in a randomized and counterbalanced order. The ankle joint was then passively moved by the dynamometer through a 50° range of motion (ROM), from 30° of plantar flexion (PF) to 20° of dorsiflexion (DF), at 2°/s, and the ultrasound shear-wave velocity was measured to obtain the shear moduli of the posterior aponeurosis, MG belly, and adjoining aponeuroses. The main findings were: 1) the shear modulus in K180 was significantly higher (p < 0.05) than in K90 for the posterior aponeurosis (across all ankle angles, 10.2 ± 5.7 kPa-59.4 ± 28.7 kPa vs. 5.4 ± 2.2 kPa-11.6 ± 4.1 kPa), the MG belly (from PF10° to DF20°, 9.7 ± 2.2 kPa-53.6 ± 18.6 kPa vs. 8.0 ± 2.7 kPa-9.5 ± 3.7 kPa), and the adjoining aponeuroses (across all ankle angles, 17.3 ± 7.8 kPa-80 ± 25.7 kPa vs. 12.2 ± 4.5 kPa-52.4 ± 23.0 kPa); 2) the shear modulus of the posterior aponeurosis significantly increased (p < 0.05) from PF10° to PF20° in K180, while the shear modulus of the MG belly significantly increased (p < 0.05) from 0° to PF20° only in K180, and the shear modulus of the adjoining aponeuroses significantly increased (p < 0.05) across the whole ankle ROM in both K180 and K90. These results suggest that different knee-joint positions affect not only the bi-articular gastrocnemius muscle itself but also the mechanical behavior of its aponeuroses. In addition, compared to the gradual stiffening of the adjoining aponeuroses across the whole ankle ROM, the posterior aponeurosis became slack in the plantar flexed positions and then stiffened gradually when the knee was fully extended. This suggests distinct, joint position-dependent stiffening behaviors for the posterior and adjoining aponeuroses. Keywords: aponeurosis, plantar flexion and dorsiflexion, shear modulus, shear wave elastography
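The shear moduli quoted above come from shear wave elastography, where tissue stiffness is derived from the measured shear wave propagation speed. A minimal sketch of the standard conversion is given below, assuming a linear elastic, isotropic, locally homogeneous medium and a typical soft tissue density; both the assumption and the sample speeds are illustrative, not data from the study.

```python
def shear_modulus_kpa(shear_wave_speed_m_s, tissue_density_kg_m3=1050.0):
    """Standard elastography conversion mu = rho * v^2, valid under the
    assumption of a linear elastic, isotropic medium; 1050 kg/m3 is a
    commonly assumed soft tissue density."""
    mu_pa = tissue_density_kg_m3 * shear_wave_speed_m_s ** 2
    return mu_pa / 1000.0   # Pa -> kPa

# Illustrative shear wave speeds spanning a slack-to-stiff muscle range
for v in (2.0, 4.0, 7.0):   # m/s, hypothetical
    print(f"v = {v:.1f} m/s -> mu = {shear_modulus_kpa(v):.1f} kPa")
```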
Procedia PDF Downloads 190