Search results for: machine capacity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6786

4986 Imparting Second Language Skill through M-Learning

Authors: Subramaniam Chandran, A. Geetha

Abstract:

This paper addresses three issues: how to prepare an instructional design for imparting English language skills through inter-disciplinary self-learning material; how disadvantaged students benefit from language skills imparted through m-learning; and whether m-learners perform better than other learners. These issues are examined through an experimental study conducted among distance learners enrolled in a preparatory program for a bachelor's degree. The program is designed for disadvantaged learners, especially school drop-outs, to qualify them to pursue a graduate program through distance education. The paper also explains how mobile learning helps them enhance their learning capacity despite their rural background and other disadvantages. In India, nearly half of the students enrolled in schools do not complete their studies, and the pursuit of higher education is very low compared with developed countries. This study finds a significant increase in learning capacity, and mobile learning appears to be a viable alternative where the conventional system cannot reach disadvantaged learners. Improved English language skill is one reason for this performance. Exercises framed from relevant self-learning material not only improve language skill but also widen subject knowledge. The paper discusses these issues on the basis of the study conducted among the disadvantaged learners.

Keywords: English language skill, disadvantaged learners, distance education, m-learning

Procedia PDF Downloads 658
4985 Quality Assessment of Instant Breakfast Cereals from Yellow Maize (Zea mays), Sesame (Sesamum indicum), and Mushroom (Pleurotus ostreatus) Flour Blends

Authors: Mbaeyi-Nwaoha, Ifeoma Elizabeth, Orngu, Africa Orngu

Abstract:

Composite flours were processed from blends of yellow maize (Zea mays), sesame seed (Sesamum indicum) and oyster mushroom (Pleurotus ostreatus) powder in the ratios 80:20:0, 75:20:5, 70:20:10, 65:20:15 and 60:20:20, respectively, to produce breakfast cereals coded YSB, SMB, TMB, PMB and OMB, with YSB as the control. The breakfast cereals were produced by hydrating and toasting the yellow maize and sesame at 160 °C for 25 minutes, then blending them with oven-dried, packaged oyster mushroom. The developed products (flours and breakfast cereals) were analyzed for proximate composition, vitamins, minerals, anti-nutrients, phytochemicals, and functional, microbial and sensory properties. Results for the flours showed proximate composition (%): moisture (2.59-7.27), ash (1.29-7.57), crude fat (0.98-14.91), fibre (1.03-16.02), protein (10.13-35.29), carbohydrate (75.48-38.18) and energy (295.18-410.75 kcal). Vitamins ranged as follows: vitamin A (0.14-9.03 µg/100 g), vitamin B1 (0.14-0.38), vitamin B2 (0.07-0.15), vitamin B3 (0.89-4.88) and vitamin C (0.03-4.24). Minerals (mg/100 g) were: calcium (8.01-372.02), potassium (1.40-1.85), magnesium (12.09-13.15), iron (1.23-5.25) and zinc (0.85-2.20). Anti-nutrients and phytochemicals ranged from: tannin (1.50-1.61 mg/g), phytate (0.40-0.71 mg/g), oxalate (1.81-2.02 mg/g), flavonoid (0.21-1.27%) and phenolic (1.12-2.01%). Functional properties showed: bulk density (0.51-0.77 g/ml), water absorption capacity (266.0-301.5%), swelling capacity (136.0-354.0%), least gelation (0.55-1.45 g/g) and reconstitution index (35.20-69.60%). The total viable count ranged from 6.4 × 10² to 1.0 × 10³ cfu/g, while the total mold count ranged from 1.0 × 10 to 3.0 × 10 cfu/g.
For the breakfast cereals, proximate composition (%) ranged thus: moisture (4.07-7.08), ash (3.09-2.28), crude fat (16.04-12.83), crude fibre (4.30-8.22), protein (16.14-22.54), carbohydrate (56.34-47.04) and energy (434.34-393.83 kcal). Vitamin A (7.99-5.98 µg/100 g), vitamin B1 (0.08-0.42 mg/100 g), vitamin B2 (0.06-0.15 mg/100 g), vitamin B3 (1.91-4.52 mg/100 g) and vitamin C (3.55-3.32 mg/100 g) were reported, while minerals (mg/100 g) were: calcium (75.31-58.02), potassium (0.65-4.01), magnesium (12.25-12.62), iron (1.21-4.15) and zinc (0.40-1.32). Anti-nutrients and phytochemicals ranged as: tannin (1.12-1.21 mg/g), phytate (0.69-0.53 mg/g), oxalate (1.21-0.43 mg/g), flavonoid (0.23-1.22%) and phenolic (0.23-1.23%). Bulk density (0.77-0.63 g/ml), water absorption capacity (156.5-126.0%), swelling capacity (309.5-249.5%), least gelation (1.10-0.75 g/g) and reconstitution index (49.95-39.95%) were recorded. The total viable count ranged from 3.3 × 10² to 4.2 × 10² cfu/g, and no mold growth was detected. Sensory scores revealed that the breakfast cereals were acceptable to the panelists with oyster mushroom supplementation up to 10%.

Keywords: oyster mushroom (Pleurotus ostreatus), sesame seed (Sesamum indicum), yellow maize (Zea mays), instant breakfast cereals

Procedia PDF Downloads 192
4984 An Experimental Study on the Thermal Properties of Concrete Aggregates in Relation to Their Mineral Composition

Authors: Kyung Suk Cho, Heung Youl Kim

Abstract:

The analysis of the petrologic characteristics and thermal properties of crushed concrete aggregates such as granite, gneiss, dolomite, shale and andesite found that the rock-forming minerals determine the thermal properties of the aggregates. The thermal expansion coefficients of aggregates containing abundant quartz increased rapidly at 573 °C due to the quartz transition. The mass of aggregates containing carbonate minerals decreased rapidly at 750 °C due to decarbonation, while their specific heat capacity increased relatively. The mass of aggregates containing hydrated silicate minerals decreased more significantly, and their specific heat capacities were greater than those of aggregates containing feldspar or quartz; it is deduced that the hydroxyl group (OH) in the hydrated silicates dissociates as its bond loosens at high temperature. Aggregates containing mafic minerals turned red at high temperatures due to oxidation reactions. Moreover, a comparison of cooling methods showed that rapid cooling with water reduced aggregate mass more than slow cooling at room temperature. To observe the fire resistance of concrete composed of the same coarse aggregates, mass loss and the compressive strength reduction factor were measured at 200, 400, 600 and 800 °C. The analysis of granite and gneiss found that the difference in thermal expansion coefficients between cement paste and aggregate caused by the quartz transition at 573 °C produced thermal stress inside the concrete and thus triggered cracking. The ferromagnesian hydrated silicates in andesite and shale caused greater reduction in both initial stiffness and mass than other aggregates. However, the thermal expansion coefficients of andesite and shale were similar to that of cement paste, and since they were low in thermal conductivity and high in specific heat capacity, concrete cracking was relatively less severe. Being slow in heat transfer, they were judged to be materials of high heat capacity.

Keywords: crushed aggregates, fire resistance, thermal expansion, heat transfer

Procedia PDF Downloads 223
4983 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and time to market for business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistence (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. It is therefore not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input to i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices through an environment variable, providing service discovery. i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation and software distribution, while relying on mature, proven cloud vendor technology for networking, load balancing and persistence. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer carries more important disadvantages. Resource usage is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). The system is also more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. i2kit is currently under development at the IMDEA Software Institute.

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 175
4982 Study on the Influence of ‘Sports Module’ Teaching on High School Students’ Physical Quality

Authors: Xiaoming Zeng, Xiaozan Wang, Qinping Xu, Shaoxian Wang

Abstract:

Research purpose: The 2017 high school physical education and health curriculum standard advocates modular teaching. This study explores the impact of ‘sports module’ teaching on the physical fitness of high school students. Research methods: 800 senior high school students were randomly divided into two groups (400 in the experimental group and 400 in the control group). The experimental group received modular physical education teaching and the control group received the conventional teaching mode for one semester. Before and after the experiment, the physical fitness of the subjects was tested, including vital capacity, the 50 m run, the standing long jump and the sit-and-reach. Results: After the experiment, vital capacity (t = -4.007, p < 0.01), the 50 m run (t = 2.638, p < 0.01) and the standing long jump (t = -4.067, p < 0.01) of the experimental group improved significantly. High school sports modular teaching has distinctive characteristics: it attaches great importance to the independent development of students' personalities, and students can choose their favorite modules, develop various skills and actively participate in sports activities in the classroom. The density and intensity of exercise are greatly increased, so students' speed (50 m run), cardiopulmonary endurance (vital capacity), agility and strength (standing long jump) scores improve markedly. At the same time, the students' sit-and-reach scores did not improve significantly, which is attributed to the lack of relevant equipment at school and the students' inattention to stretching after exercise or to regular flexibility training. Conclusion: (1) ‘Sports module’ teaching can effectively improve the physical fitness of high school students, mainly in cardiopulmonary function, speed and explosive power. (2) In the future, ‘sports module’ teaching should give full play to its advantages and add courses to improve students' flexibility.
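The t statistics and p values above come from pre/post comparisons within a group. A minimal sketch of how such a paired t-test is computed, using made-up vital-capacity numbers (the sample size, means and spreads are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post vital-capacity scores (ml) for 30 students.
pre = rng.normal(3400, 300, 30)
post = pre + rng.normal(150, 120, 30)   # built-in average improvement

# Paired (related-samples) t-test on the same students before and after
t, p = stats.ttest_rel(pre, post)
print(round(t, 3), p < 0.01)
```

A negative t here means the post scores are higher than the pre scores, matching the sign convention in the abstract's vital-capacity result.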

Keywords: module teaching, physical quality, senior high school student, sports

Procedia PDF Downloads 112
4981 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model for electrical failures and voltage reliability that integrates RFM analysis, K-means clustering and LSTM networks. RFM analysis, traditionally used in customer value assessment, is used to categorize and analyze electrical components based on their failure recency, frequency and monetary impact. K-means clustering segments electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks capture the temporal dependencies and patterns in the failure data. This integration of RFM, K-means and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities, enhances overall energy management and sustainability, and demonstrates the potential of advanced machine learning techniques to transform electrical system management in smart cities.
The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering and LSTM networks for this purpose. The proposed approach presents an efficient and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods and contributing to the proactive maintenance and optimization of electrical infrastructures, overall energy management and sustainability.
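The paper does not publish its code; as a rough illustration of the RFM-plus-K-means stage only, the sketch below builds recency/frequency/monetary features for hypothetical components and clusters them with a hand-rolled Lloyd's k-means (all component data and constants are invented):

```python
import numpy as np

# Hypothetical failure log per component: days since last failure (Recency),
# failure count (Frequency), cumulative repair cost (Monetary).
rfm = np.array([
    [ 5., 12., 900.],
    [40.,  2., 120.],
    [ 7., 10., 750.],
    [60.,  1.,  80.],
    [ 3., 15., 990.],
    [55.,  3., 150.],
])

# z-score each feature so no single scale dominates the Euclidean distance
z = (rfm - rfm.mean(axis=0)) / rfm.std(axis=0)

# Plain Lloyd's k-means (k = 2), seeded with the first two observations
centroids = z[:2].copy()
for _ in range(20):
    d = np.linalg.norm(z[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)                      # assign to nearest centroid
    centroids = np.array([z[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # components with recent, frequent, costly failures share a cluster
```

In the paper's pipeline, the resulting segments would then feed cluster-specific LSTM models of the failure time series; that step is omitted here.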

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 49
4980 The MicroRNA-2110 Suppressed Cell Proliferation and Migration Capacity in Hepatocellular Carcinoma Cells

Authors: Pelin Balcik Ercin

Abstract:

Introduction: ZEB2, a member of the ZEB transcription factor family, has a role in epithelial to mesenchymal transition during development and metastasis. Altered expression of circulating extracellular miRNAs is observed in disease, and extracellular miRNAs play an important role in the cancer cell microenvironment. In a ChIP-Seq study, the expression of miR-2110 was found to be regulated by ZEB2. In this study, the effects of miR-2110 on cell proliferation and migration of hepatocellular carcinoma (HCC) cells were examined. Material and methods: SNU398 cells were transfected with mimic miR-2110 (20 nM) (HMI0375, Sigma-Aldrich) or negative control miR (HMC0002, Sigma-Aldrich). MicroRNA isolation was accomplished with the mirVana isolation kit according to the manufacturer's instructions. cDNA synthesis was performed, and expression was calibrated against the Ct of controls. The real-time quantitative PCR (RT-qPCR) reaction was performed using the TaqMan Fast Advanced Master Mix (Thermo Sci.). Ct values of miR-2110 were normalized to miR-186-5p and miR-16-5p as intracellular references. Cell proliferation was analyzed with the xCELLigence RTCA system. The wound healing assay was analyzed with the ImageJ program, and relative fold change was calculated. Results: The mimic-miR-2110-transfected SNU398 cells expressed nearly nine-fold (log2) more miR-2110 than negative-control-transfected cells. Proliferation of the mimic-miR-2110-transfected HCC cells was significantly inhibited compared to the negative control cells. Furthermore, miR-2110-SNU398 cell migration capacity was decreased approximately four-fold relative to negative-control-miR-SNU398 cells. Conclusion: Our results suggest that miR-2110 inhibits cell proliferation and negatively affects cell migration in HCC cells. These data reflect the complexity of the regulation of EMT transcription factors by microRNAs. These initial results point to the potential of miR-2110 as a predictive biomarker in HCC.
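A "nine-fold (log2)" overexpression of this kind is what the standard Livak 2^-ΔΔCt calculation for normalized qPCR data produces. A small sketch of the arithmetic with hypothetical Ct values (not the paper's measurements):

```python
# Hypothetical qPCR cycle thresholds (Ct); lower Ct = more template.
ct_target_mimic   = 18.0   # miR-2110 in mimic-transfected cells
ct_ref_mimic      = 20.0   # mean of reference miRs (e.g. miR-186-5p, miR-16-5p)
ct_target_control = 27.0   # miR-2110 in negative-control cells
ct_ref_control    = 20.0

# Livak 2^-ΔΔCt method: normalize to the references, then to the control group
delta_mimic   = ct_target_mimic - ct_ref_mimic      # ΔCt, mimic
delta_control = ct_target_control - ct_ref_control  # ΔCt, control
ddct = delta_mimic - delta_control                  # ΔΔCt
log2_fold = -ddct                                   # log2 fold change
fold_change = 2.0 ** log2_fold                      # linear fold change
print(log2_fold, fold_change)
```

With these invented numbers, ΔΔCt = -9, i.e. a nine-fold (log2) overexpression, corresponding to a 512-fold linear increase.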

Keywords: epithelial to mesenchymal transition, EMT, hepatocellular carcinoma cells, micro-RNA-2110, ZEB2

Procedia PDF Downloads 119
4979 Performance Analysis in 5th Generation Massive Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, Jean-Pierre Dubois, Georges El Soury

Abstract:

Fifth generation wireless networks guarantee significant capacity enhancement to suit more clients and services at higher information rates with better reliability while consuming less power. The deployment of massive multiple-input multiple-output technology guarantees broadband wireless networks through base station antenna arrays that serve a large number of users on the same frequency and time-slot channels. In this work, we evaluate the performance of massive multiple-input multiple-output (MIMO) systems in 5th generation cellular networks in terms of capacity and bit error rate. Several cases were considered and analyzed to compare the performance of massive MIMO systems while varying the number of antennas at both the transmitting and receiving ends. We found that, unlike in classical MIMO systems, reducing the number of transmit antennas while increasing the number of antennas at the receiver end provides a better route to performance enhancement. In addition, enhanced orthogonal frequency division multiplexing and beam division multiple access schemes further improve the performance of massive MIMO systems and make them more reliable.
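The antenna-count comparisons described above can be explored with the standard MIMO capacity formula C = log2 det(I + (SNR/Nt)·HHᴴ). The sketch below is an illustrative Monte Carlo over i.i.d. Rayleigh channels, not the authors' simulation setup; it shows how an asymmetric 2×8 array can beat a symmetric 4×4 array at low SNR, where receive array gain dominates over spatial multiplexing:

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(nt, nr, snr_db, trials=2000):
    """Mean capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO channel,
    equal power allocation, no channel knowledge at the transmitter."""
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        # nr x nt channel with unit-variance complex Gaussian entries
        h = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T).real)
    return total / trials

print(ergodic_capacity(4, 4, 0))   # symmetric 4x4 array at 0 dB SNR
print(ergodic_capacity(2, 8, 0))   # fewer Tx, more Rx antennas at 0 dB SNR
```

At high SNR the ranking can reverse, since the symmetric array supports more spatial streams; the sketch only illustrates the low-SNR regime.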

Keywords: beam division multiple access, D2D communication, enhanced OFDM, fifth generation broadband, massive MIMO

Procedia PDF Downloads 254
4978 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool based on artificial neural networks to reduce treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impact. It is then analyzed to identify patterns and correlations and finally used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool through machine learning contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 97
4977 Melanoma and Non-Melanoma Skin Lesion Classification Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin disease is considered the fourth most common disease worldwide, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that computer algorithms can outperform dermatologists in diagnostic performance. However, existing methods still need improvement to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples; the dataset contains 3,297 dermoscopic images in benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as the support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
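The abstract does not specify how the ensemble combines its members; one common construction is soft voting, averaging the malignancy probabilities of several trained models before thresholding. A minimal sketch with invented probabilities (the models and numbers are hypothetical stand-ins):

```python
import numpy as np

# Hypothetical malignancy probabilities from three trained classifiers
# (e.g. fine-tuned CNN backbones) for four dermoscopic images.
p_model_a = np.array([0.92, 0.15, 0.60, 0.05])
p_model_b = np.array([0.88, 0.30, 0.45, 0.10])
p_model_c = np.array([0.95, 0.20, 0.70, 0.02])

# Soft voting: average the predicted probabilities, then threshold.
p_ensemble = np.mean([p_model_a, p_model_b, p_model_c], axis=0)
pred = (p_ensemble >= 0.5).astype(int)   # 1 = malignant, 0 = benign
print(p_ensemble.round(3), pred)
```

Averaging tends to smooth out individual models' errors: the third image is classified as malignant by the ensemble even though one member votes the other way.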

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 74
4976 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand

Authors: Mathuravech Thanaphon, Thephasit Nat

Abstract:

The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in a grid with increasing levels of RE penetration and rising peak demand, EGAT has been studying the potential for additional PSH capacity for several years, to enable an increased share of RE and replace existing fossil-fuel-fired generation, as well as the role that pumped storage hydropower would play in fulfilling multiple grid functions and renewable integration. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most future electricity generation will come from RE, chiefly wind and photovoltaics, and significant additional energy storage capacity will be needed. In this paper, the impact of integrating a PSH system on the adequacy of renewable-rich power generating systems, so as to reduce the number of thermal generating units, is investigated. The variations of the system adequacy indices are analyzed for different PSH-renewable capacities and storage levels. The study case is the Power Development Plan 2018 rev. 1 (PDP2018 rev.1), modified by integrating six new PSH systems and the RE planning and development expected after 2030. The system adequacy indices are obtained using Multi-Objective Genetic Algorithm (MOGA) optimization. MOGA is a probabilistic, stochastic heuristic capable of finding global optima, with the advantage that the fitness function does not require gradient information. In this sense, the method is flexible in solving reliability optimization problems for a composite power system.
The optimization, with an hourly time step, covers a planning horizon of years, much longer than the weekly horizon usual in scheduling studies. The objective function, implemented in MATLAB, maximizes RE generation while minimizing energy imbalances and thermal generation. PDP2018 rev.1 is simulated based on its planned capacity through 2030 and 2050. Four main scenarios are analyzed according to the targeted share of renewables: 1) business as usual (BAU), 2) national targets (30% RE in 2030), 3) carbon neutrality targets (50% RE in 2050), and 4) 100% RE, or full decarbonization. According to the results, generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system and better allocate renewable generation, reducing thermal generation and improving system reliability. These results show that a significant level of reliability improvement can be obtained from PSH, especially in renewable-rich power systems.
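As a toy illustration of genetic-algorithm dispatch, the sketch below is a heavily simplified, single-objective (scalarized) stand-in for the paper's MOGA: the genes are hourly thermal outputs, and the fitness penalizes thermal generation and energy imbalance. The demand and renewable profiles, weights and GA settings are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 6-hour horizon (MW): forecast renewable output and demand.
re_avail = np.array([50., 80., 120., 90., 40., 20.])
demand   = np.array([70., 75., 100., 110., 90., 60.])

def cost(thermal):
    """Scalarized objective: penalize thermal generation and imbalance."""
    supply = np.minimum(re_avail, demand) + thermal   # surplus RE is curtailed
    imbalance = np.abs(demand - supply)
    return thermal.sum() + 10.0 * imbalance.sum()

# Minimal real-coded GA: genes = hourly thermal output in [0, 120] MW
pop = rng.uniform(0, 120, size=(40, 6))
for gen in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    elite = pop[fitness.argsort()[:10]]           # truncation selection
    parents = elite[rng.integers(0, 10, (30, 2))]
    children = parents.mean(axis=1)               # arithmetic crossover
    children += rng.normal(0, 3, children.shape)  # Gaussian mutation
    pop = np.vstack([elite, np.clip(children, 0, 120)])

best = pop[np.array([cost(ind) for ind in pop]).argmin()]
print(best.round(1))   # thermal roughly fills the (demand - RE) gap each hour
```

A true MOGA (as in the paper) would keep a Pareto front over the competing objectives rather than collapsing them into one weighted cost, but the selection/crossover/mutation loop is the same skeleton.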

Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm

Procedia PDF Downloads 52
4975 Development of Adsorbents for Removal of Hydrogen Sulfide and Ammonia Using Pyrolytic Carbon Black from Waste Tires

Authors: Yang Gon Seo, Chang-Joon Kim, Dae Hyeok Kim

Abstract:

It is estimated that 1.5 billion tires are produced worldwide each year, which will eventually end up as waste tires, representing a major potential waste and environmental problem. There has been great interest in pyrolysis as an alternative treatment process for waste tires, producing valuable oil, gas and solid products. The oil and gas products may be used directly as a fuel or a chemical feedstock. The solid fraction from tire pyrolysis typically ranges from 30 to 45 wt% and has carbon contents of up to 90 wt%; notably, however, the solids have high sulfur contents of 2 to 3 wt% and ash contents of 8 to 15 wt% related to the additive metals. Efforts to upgrade tire pyrolysis products to high-value products have concentrated on upgrading the solids to higher-quality carbon black and to activated carbon. Hydrogen sulfide and ammonia are common malodorous compounds found in emissions from many sewage treatment plants and industrial plants. Removing these harmful gases from emissions is therefore significant in both daily life and industry, because they can cause health problems in humans and detrimental effects on catalysts. In this work, pyrolytic carbon black from waste tires was used to develop adsorbents with good adsorption capacity for the removal of hydrogen sulfide and ammonia. Pyrolytic carbon blacks were prepared by pyrolysis of waste tire chips of 5 to 20 mm under a nitrogen atmosphere at 600 °C for 1 hour. Pellet-type adsorbents were prepared from mixtures of carbon black, a metal oxide, and sodium hydroxide or hydrochloric acid, and their adsorption capacities were estimated using the breakthrough curve of a continuous fixed-bed adsorption column at ambient conditions. The adsorbent manufactured from a mixture of carbon black, iron(III) oxide and sodium hydroxide showed the maximum working capacity for hydrogen sulfide.
For ammonia, the maximum working capacity was obtained with the adsorbent manufactured from a mixture of carbon black, copper(II) oxide and hydrochloric acid.

Keywords: adsorbent, ammonia, pyrolytic carbon black, hydrogen sulfide, metal oxide

Procedia PDF Downloads 253
4974 Noise Measurement and Awareness at Construction Site: A Case Study

Authors: Feiruz Ab'lah, Zarini Ismail, Mohamad Zaki Hassan, Siti Nadia Mohd Bakhori, Mohamad Azlan Suhot, Mohd Yusof Md. Daud, Shamsul Sarip

Abstract:

The construction industry is one of the major sectors in Malaysia. Apart from providing facilities, services, and goods, it also offers employment opportunities to local and foreign workers. In fact, construction workers are exposed to hazardous levels of noise generated from various sources, including excavators, bulldozers, concrete mixers, and piling machines. Previous studies indicated that piling and concrete work contributes the highest noise levels among these sources. Therefore, the aim of this study is to measure noise exposure during the piling process and to determine workers' awareness of noise pollution at the construction site. Noise readings were obtained at the construction site using a digital sound level meter (SLM), and the noise exposure of the workers was mapped. Readings were taken at four distances from the piling machine: 5, 10, 15 and 20 meters. Furthermore, a questionnaire was distributed to assess knowledge of noise pollution at the construction site. The results showed that the mean noise level at a 5 m distance was more than 90 dB, which exceeds the recommended level. Although the level of awareness regarding the effects of noise pollution is satisfactory, the majority of workers (90%) still did not wear ear protection devices during the work period. Therefore, safety guidelines on noise pollution control should be implemented to provide a safe working environment and prevent early occupational hearing loss.
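When averaging sound-level readings such as these, note that decibels are logarithmic: the equivalent continuous level (Leq) averages sound energy rather than dB values, and is higher than the arithmetic mean whenever the readings fluctuate. A short sketch with hypothetical readings (not the study's data):

```python
import numpy as np

# Hypothetical A-weighted readings (dB) taken 5 m from a piling machine.
readings_db = np.array([88.0, 95.0, 91.0, 97.0, 90.0])

arithmetic_mean = readings_db.mean()

# Equivalent continuous level: convert to energy, average, return to dB.
leq = 10 * np.log10(np.mean(10 ** (readings_db / 10)))

print(round(arithmetic_mean, 1), round(leq, 1))
```

Here the energy-based Leq exceeds the arithmetic mean by about a decibel, because the loudest readings dominate the energy sum.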

Keywords: construction, noise awareness, noise pollution, piling machine

Procedia PDF Downloads 371
4973 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma, and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable,representing the synaptic relation. This variable determines the magnitude ofthe short term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. 
We use Spike-Time-Dependent-Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are then fired in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strength with STDP to make a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
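The pairwise synaptic mechanism described above can be sketched in a few lines. This is an illustrative toy (not the authors' Bindsnet code): the uniform weights, relation values, and leak setting are all invented for the example.

```python
# Toy sketch: a point LIF neuron extended with pairwise synaptic relations,
# so the impact of a spike depends on which synapse fired just before it.

class LIFNeuron:
    def __init__(self, weights, relations=None, decay=1.0):
        self.w = weights            # synaptic weights
        self.rel = relations        # rel[i][j]: scaling of synapse j right after i fires
        self.decay = decay          # membrane leak per step (1.0 = no leak)
        self.v = 0.0                # membrane potential
        self.last = None            # most recently active synapse

    def step(self, syn):
        self.v *= self.decay        # leak
        psp = self.w[syn]
        if self.rel is not None and self.last is not None:
            psp *= self.rel[self.last][syn]   # short-term pairwise plasticity
        self.v += psp
        self.last = syn
        return self.v

def respond(neuron, order):
    """Final membrane potential after firing the input synapses in `order`."""
    neuron.v, neuron.last = 0.0, None         # reset between sequences
    for syn in order:
        neuron.step(syn)
    return neuron.v

# In-order transitions potentiate (x1.5); all other transitions depress (x0.5).
REL = [[0.5, 1.5, 0.5],
       [0.5, 0.5, 1.5],
       [0.5, 0.5, 0.5]]
```

With uniform weights and no leak, the plain point neuron responds identically to a sequence and its reverse, while the neuron with dendritic relations does not, which is the discrimination effect the abstract describes.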

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 124
4972 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is one in which many observations are shared between the two classes. Imbalanced and overlapped data can frequently be found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is challenging because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts: non-overlapping, lightly overlapping, and severely overlapping, and applying the classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of the modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and to compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
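The space-splitting idea can be illustrated with a small sketch. The actual paper combines the Hausdorff distance with the margin of a modified SVM; the per-point nearest-opposite-class distance and the thresholds `t1 > t2` below are simplifying assumptions for illustration only.

```python
# Sketch: measure set-level separation with the Hausdorff distance, and assign
# each point to a 'none' / 'light' / 'severe' overlap region by its distance
# to the nearest point of the opposite class.

import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    d_ab = max(min(math.dist(a, b) for b in B) for a in A)
    d_ba = max(min(math.dist(b, a) for a in A) for b in B)
    return max(d_ab, d_ba)

def split_regions(majority, minority, t1, t2):
    """Three-way split of the data space (illustrative thresholds t1 > t2)."""
    regions = {"none": [], "light": [], "severe": []}
    for x in majority + minority:
        other = minority if x in majority else majority
        d = min(math.dist(x, o) for o in other)   # distance to opposite class
        if d > t1:
            regions["none"].append(x)
        elif d > t2:
            regions["light"].append(x)
        else:
            regions["severe"].append(x)
    return regions
```

A separate classifier would then be trained within each region, as the procedure in the abstract prescribes.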

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 304
4971 Antioxidant Activity of Aristolochia longa L. Extracts

Authors: Merouani Nawel, Belhattab Rachid

Abstract:

Aristolochia longa L. (Aristolochiaceae) is a plant native to Algeria used in traditional medicine. This study was devoted to the determination of the polyphenol, flavonoid, and condensed tannin contents of Aristolochia longa L. after extraction using various solvents of different polarities (methanol, acetone, and distilled water). These extracts were prepared from the stem, leaves, fruits, and rhizome. The antioxidant activity was determined using three in vitro assay methods: scavenging effect on DPPH, the reducing power assay, and ẞ-carotene bleaching inhibition (CBI). The results obtained indicate that the acetone extracts from the aerial parts presented the highest contents of polyphenols. The antioxidant activity results showed that all extracts of Aristolochia longa L., prepared using the different solvents, have diverse antioxidant capacities. The aerial parts methanol extract exhibited the highest antioxidant capacity in the DPPH and reducing power assays (55.04 µg/ml ± 1.29 and 0.2 mg/ml ± 0.019, respectively), whereas the aerial parts acetone extract showed the highest capacity in the ẞ-carotene bleaching inhibition test, with 57%. These preliminary results could be used to justify the traditional use of this plant, and its bioactive substances could be exploited for therapeutic purposes as antioxidant and antimicrobial agents.
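As a minimal illustration of how scavenging activity in DPPH-type assays is typically computed (this is the standard absorbance formula, not code from this study, and the readings below are invented):

```python
def percent_inhibition(a_control, a_sample):
    """Radical scavenging (%) = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0
```

A lower sample absorbance relative to the control indicates stronger radical scavenging by the extract.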

Keywords: Aristolochia longa L., polyphenols, flavonoids, condensed tannins, antioxidant activity

Procedia PDF Downloads 242
4970 Effect of Thermal Pretreatment on Functional Properties of Chicken Protein Hydrolysate

Authors: Nutnicha Wongpadungkiat, Suwit Siriwatanayotin, Aluck Thipayarat, Punchira Vongsawasdi, Chotika Viriyarattanasak

Abstract:

Chicken products are a major export of Thailand. With dramatically increasing consumption of chicken products worldwide, abundant wastes are generated by the chicken meat processing industry. Recently, much research on the development of value-added products from the chicken meat industry has focused on the production of protein hydrolysate, utilized as a food ingredient for the human diet and for animal feed. The present study aimed to determine the effect of thermal pre-treatment on the functional properties of chicken protein hydrolysate. Chicken breasts were heated at 40, 60, 80 and 100ºC prior to hydrolysis by Alcalase at 60ºC, pH 8 for 4 hr. The hydrolysate was freeze-dried and subsequently used for assessment of its functional properties and molecular weight by gel electrophoresis (SDS-PAGE). The obtained results show that increasing the pre-treatment temperature increased oil holding capacity and emulsion stability while decreasing antioxidant activity and water holding capacity. The SDS-PAGE analysis showed evidence of protein aggregation in the hydrolysate treated at the higher pre-treatment temperatures. These results suggest a connection between the molecular weight of the hydrolysate and its functional properties.

Keywords: chicken protein hydrolysate, enzymatic hydrolysis, thermal pretreatment, functional properties

Procedia PDF Downloads 262
4969 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing

Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar

Abstract:

The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, enhanced worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.

Keywords: internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development

Procedia PDF Downloads 18
4968 Atomic Scale Storage Mechanism Study of the Advanced Anode Materials for Lithium-Ion Batteries

Authors: Xi Wang, Yoshio Bando

Abstract:

Lithium-ion batteries (LIBs) can deliver high levels of energy storage density and offer long operating lifetimes, but their power density is too low for many important applications. Therefore, we developed new strategies and fabricated novel electrodes for fast Li transport with facile synthesis, including N-doped graphene-SnO2 sandwich papers, a bicontinuous nanoporous Cu/Li4Ti5O12 electrode, and binder-free N-doped graphene papers. In addition, by using advanced in-situ TEM and STEM techniques together with theoretical simulations, we systematically studied their storage mechanisms at the atomic scale, which sheds new light on the reasons for the ultrafast lithium storage and high capacity of these advanced anodes. For example, using advanced in-situ TEM, we directly investigated these processes with an individual CuO nanowire anode and constructed a LIB prototype within a TEM. Although transition metal oxide anodes utilizing the so-called conversion mechanism are promising candidates for LIBs, they typically suffer from severe capacity fading during the first lithiation-delithiation cycle. We also report atomistic insights into the GN energy storage as revealed by in situ TEM. The lithiation process on edges and basal planes is directly visualized, the pyrrolic N "hole" defect and the perturbed solid-electrolyte-interface (SEI) configurations are observed, and the charge transfer states for three N-existing forms are also investigated. In situ HRTEM experiments together with theoretical calculations provide solid evidence that enlarged edge {0001} spacings and surface "hole" defects result in improved surface capacitive effects and thus high rate capability, while the high capacity is owing to short-distance orderings at the edges during discharging and numerous surface defects; these phenomena could not previously be understood by standard electron or X-ray diffraction analyses.

Keywords: in-situ TEM, STEM, advanced anode, lithium-ion batteries, storage mechanism

Procedia PDF Downloads 348
4967 Evaluating the Performance of Organic, Inorganic and Liquid Sheep Manure on Growth, Yield and Nutritive Value of Hybrid Napier CO-3

Authors: F. A. M. Safwan, H. N. N. Dilrukshi, P. U. S. Peiris

Abstract:

The limited availability of high-quality green forage leads to low productivity of the national dairy herd of Sri Lanka. Growing grass and fodder to suit the production system is an efficient and economical solution to this problem. Among the fodder varieties grown within the country, CO-3 is placed in a higher category, especially for tillering capacity, green forage yield, regeneration capacity, leaf-to-stem ratio, high crude protein content, resistance to pests and diseases, and freedom from adverse factors. An experiment was designed to determine the effect of organic sheep manure, inorganic fertilizers, and liquid sheep manure on the growth, yield, and nutritive value of CO-3. The study consisted of three treatments: sheep manure (T1), recommended inorganic fertilizers (T2), and liquid sheep manure (T3), which was prepared using the bucket fermentation method; each treatment had three randomly assigned replicates. The first harvest was obtained 40 days after plant establishment, when the number of leaves (NL), leaf area (LA), tillering capacity (TC), fresh weight (FW), and dry weight (DW) were recorded; the second harvest was obtained 30 days after the first harvest and the same set of data was recorded. SPSS 16 software was used for data analysis, and AOAC (2000) standard methods were used for proximate analysis. Results revealed that the plants treated with T1 recorded the highest NL, LA, TC, FW, and DW at both the first and second harvests of CO-3, and T1 differed significantly from T2 and T3 (p < 0.05). Although T3 recorded higher values than T2 for almost all growth parameters, the difference was not statistically significant (p > 0.05). In addition, the crude protein content was highest in T1 (18.33 ± 1.61) and lowest in T2 (10.82 ± 1.14), a statistically significant difference (p < 0.05). The other proximate components (crude fiber, crude fat, ash, moisture content, and dry matter) did not differ significantly between treatments (p > 0.05). In accordance with these results, the organic fertilizer was found to be the best fertilizer for CO-3 in terms of growth parameters and crude protein content.

Keywords: fertilizer, growth parameters, Hybrid Napier CO-3, proximate composition

Procedia PDF Downloads 282
4966 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity

Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Saifur Rahman Sabuj

Abstract:

This paper examines relationships between solar activity and earthquakes, applying four machine learning techniques: k-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The 23rd and 24th solar cycles, daily sunspot number, solar wind velocity, proton density, and proton temperature were all included in the dataset. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, as well as the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger at intermediate and deep depths.
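Of the four techniques compared, k-nearest neighbour is the simplest to sketch. The following toy regressor (pure Python, Euclidean distance, invented data) illustrates the idea; the study's real feature vectors would combine sunspot number, solar wind velocity, proton density, and proton temperature.

```python
# Sketch: k-nearest-neighbour regression — predict the target of a query point
# as the average target of its k nearest training points.

import math

def knn_predict(train, query, k=3):
    """train: list of (feature_tuple, target); returns the k-NN average."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    return sum(y for _, y in nearest) / k
```
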

Keywords: k-nearest neighbour, support vector regression, random forest regression, long short-term memory network, earthquakes, solar activity, sunspot number, solar wind, solar flares

Procedia PDF Downloads 67
4965 Community Base Peacebuilding in Fragile Context

Authors: Nizar Ahmad

Abstract:

Peace without community participation will remain a vision, so this study presents the contributions and efforts made by community-based organizations as seen by the local conflict-affected population in Pakhtun society. Four conflict-affected villages of Malakand Division were selected, and a sample size of 278 household respondents out of a total of 982 households was determined through online survey system software. A Chi-square test was applied to ascertain the association between various community-based organization factors and the state of peace in the area. It was found that provision of humanitarian aid, rehabilitation of the displaced population, rebuilding of trust in government, and peace festivals organized by community organizations had a significant association with the state of peace in the area. In contrast, provision of training, peace education, and monitoring and reporting of human rights violations in the war zone by local organizations were not significantly related to the state of peace in the area. Community-based organizations play an active role in building peace in the area but lack capacity, linkages with external actors, and outside support. National and international actors working in the areas of peace and conflict resolution need to focus on the capacity, networking, and peace initiatives of local organizations working in fragile contexts.
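The Chi-square test of association used in the study can be sketched as follows. The contingency table in the usage test is invented, and in practice the statistic would be compared against a critical value (or converted to a p-value) at the appropriate degrees of freedom.

```python
# Sketch: Pearson chi-square statistic for an r x c contingency table
# (observed counts vs. counts expected under independence).

def chi_square(table):
    row = [sum(r) for r in table]            # row totals
    col = [sum(c) for c in zip(*table)]      # column totals
    n = sum(row)                             # grand total
    return sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(len(table)) for j in range(len(table[0])))
```
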

Keywords: community base peacebuilding, conflict resolution, terrorism, violence

Procedia PDF Downloads 272
4964 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator

Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi

Abstract:

Elevators are examined as one of the evacuation methods in super-tall buildings. However, data on the use of elevators for evacuation at a fire are extremely scarce. Therefore, a test to measure delay time in using an evacuation elevator was conducted. In the test, the time taken to get on and off an elevator was measured, and the case in which people gave up boarding when the capacity of the elevator was exceeded was also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20 ~ 50 years old) and 40 of whom were senior citizens (over 60 years old). The capacity of the elevator was 25 people and it travelled between the 2nd and 4th floors. A video recording device was used to analyze the test. An elevator in an ordinary building, not a super-tall building, was used to measure delay time in getting on and off. In order to minimize interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people got on an empty elevator, the data were excluded. If the elevator stopped carrying 10 passengers and fewer than 10 new passengers got on, the data were excluded. Boarding of an empty elevator was observed 49 times. The average number of passengers was 23.7; it took 14.98 seconds for the passengers to get on the empty elevator, and the load factor was 1.67 N/s. It took the passengers, whose average number was 23.7, 10.84 seconds to get off the elevator, and the unload factor was 2.33 N/s. When an elevator's capacity is exceeded, the excess passengers must get off; the time taken for this and the probability of its occurrence were measured in the test. Capacity was exceeded in 37% of the boarding trials. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; the case was observed 10 times, which was 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; the case was observed 7 times, which was 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; the case was observed 4 times, which was 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; the case was observed 5 times, which was 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test. In addition, quantitative results were obtained on the relation between the number of people giving up boarding and the time taken to get on. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).

Keywords: evacuation elevator, super tall buildings, evacuees, delay time

Procedia PDF Downloads 173
4963 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and the associated probability of landslide occurrence. In this way, every informative combination of variable states can be examined.

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 55
4962 Reliability and Cost Focused Optimization Approach for a Communication Satellite Payload Redundancy Allocation Problem

Authors: Mehmet Nefes, Selman Demirel, Hasan H. Ertok, Cenk Sen

Abstract:

A typical reliability engineering problem regarding communication satellites has been considered: determining the redundancy allocation scheme of power amplifiers within the payload transponder module, whose dominant function is to amplify the power levels of the signals received from the Earth, by maximizing reliability against mass, power, and other technical limitations. Adding each redundant power amplifier component increases not only reliability but also the hardware, testing, and launch cost of a satellite. This study investigates a multi-objective approach to solving the Redundancy Allocation Problem (RAP) for a communication satellite payload transponder, focusing on design cost due to redundancy and on reliability factors. The main purpose is to find the optimum power amplifier redundancy configuration satisfying reliability and capacity thresholds simultaneously, instead of analyzing them separately or independently. A mathematical model and calculation approach are established, including objective function definitions, and the problem is then solved analytically with different input parameters in the MATLAB environment. Example results showed that payload capacity and the failure rate of power amplifiers have remarkable effects on the solution and also on processing time.
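The reliability side of the trade-off can be sketched with the standard k-out-of-n model for identical amplifiers. This is an illustrative simplification of the paper's multi-objective formulation: the unit reliability, target, and cost values are invented, and a real transponder RAP would also constrain mass and power.

```python
# Sketch: k-out-of-n active redundancy plus a brute-force search for the
# smallest (cheapest) amplifier count meeting a reliability target.

import math

def k_out_of_n(n, k, r):
    """Probability that at least k of n amplifiers (each with reliability r) work."""
    return sum(math.comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

def cheapest_config(k, r, target, unit_cost, n_max=20):
    """Smallest n >= k whose system reliability meets the target: (n, cost, R)."""
    for n in range(k, n_max + 1):
        rel = k_out_of_n(n, k, r)
        if rel >= target:
            return n, n * unit_cost, rel
    return None   # target unreachable within n_max
```

Sweeping `target` against `n * unit_cost` traces the reliability-cost trade-off curve that a multi-objective solver would explore.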

Keywords: communication satellite payload, multi-objective optimization, redundancy allocation problem, reliability, transponder

Procedia PDF Downloads 259
4961 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which relies on item selection and ability estimation using the statistical methods of maximum information selection (or selection from the posterior) and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing this to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model must recalculate the ability every time it receives a request, whereas the prediction from a trained neural network regressor can be done in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, yielding models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
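The classical estimation step being automated here can be sketched with a two-parameter logistic (2PL) IRT model and a grid-search maximum-likelihood ability estimate. The item parameters, responses, and grid bounds below are invented for illustration; the study itself uses pymc and scikit-learn rather than this hand-rolled search.

```python
# Sketch: 2PL IRT response model and a grid-search ML ability estimate.

import math

def p_correct(theta, a, b):
    """2PL IRT: P(correct | ability theta, discrimination a, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items, lo=-4.0, hi=4.0, step=0.01):
    """Maximize the response log-likelihood over a theta grid."""
    def loglik(theta):
        return sum(math.log(p) if u else math.log(1.0 - p)
                   for u, (a, b) in zip(responses, items)
                   for p in [p_correct(theta, a, b)])
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return max(grid, key=loglik)
```

This per-request recomputation is exactly the cost a trained regressor would amortize into a single forward pass.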

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 169
4960 Trimma: Trimming Metadata Storage and Latency for Hybrid Memory Systems

Authors: Yiwei Li, Boyu Tian, Mingyu Gao

Abstract:

Hybrid main memory systems combine both performance and capacity advantages from heterogeneous memory technologies. With larger capacities, higher associativities, and finer granularities, hybrid memory systems currently exhibit significant metadata storage and lookup overheads for flexibly remapping data blocks between the two memory tiers. To alleviate the inefficiencies of existing designs, we propose Trimma, the combination of a multi-level metadata structure and an efficient metadata cache design. Trimma uses a multilevel metadata table to only track truly necessary address remap entries. The saved memory space is effectively utilized as extra DRAM cache capacity to improve performance. Trimma also uses separate formats to store the entries with non-identity and identity mappings. This improves the overall remap cache hit rate, further boosting the performance. Trimma is transparent to software and compatible with various types of hybrid memory systems. When evaluated on a representative DDR4 + NVM hybrid memory system, Trimma achieves up to 2.4× and on average 58.1% speedup benefits, compared with a state-of-the-art design that only leverages the unallocated fast memory space for caching. Trimma addresses metadata management overheads and targets future scalable large-scale hybrid memory architectures.
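The core idea of tracking only truly remapped blocks can be sketched as a tiny two-level table. This is a simplified conceptual illustration, not Trimma's actual metadata layout: the segment size is invented, and the real design also separates entry formats and adds a metadata cache.

```python
# Sketch: a two-level remap table where untracked blocks fall back to the
# identity mapping, so they consume no metadata at all.

class RemapTable:
    SEG_BITS = 10                           # 1024-block segments (illustrative)

    def __init__(self):
        self.dir = {}                       # segment -> {offset: target block}

    def _split(self, block):
        return block >> self.SEG_BITS, block & ((1 << self.SEG_BITS) - 1)

    def remap(self, block, target):
        seg, off = self._split(block)
        self.dir.setdefault(seg, {})[off] = target   # allocate level lazily

    def lookup(self, block):
        seg, off = self._split(block)
        return self.dir.get(seg, {}).get(off, block)  # identity if untracked
```

Only segments containing at least one non-identity mapping occupy storage, which mirrors the paper's claim that metadata space is saved and can be reused as extra cache capacity.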

Keywords: memory system, data cache, hybrid memory, non-volatile memory

Procedia PDF Downloads 66
4959 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers

Authors: Elena Carcano, James Ball, Betty Tiko

Abstract:

Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as the competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can lead to additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed in hydropower plants, which alter water flow and impact fish migration patterns and behaviors. To assess how fish species can adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach involves determining the threshold condition that is tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low flow value from the long-term FDC.
The results establish a practical connection between hydrological and environmental information and simplify the process by establishing a single reference flow value that represents the minimum environmental flow that should be maintained.
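Reading a low-flow value off the lower tail of the FDC can be sketched as follows, using Weibull plotting positions. The flow series in the usage test is synthetic, and the choice of exceedance probability (e.g. Q95) is illustrative rather than the study's actual threshold.

```python
# Sketch: build a flow duration curve and read off the flow equalled or
# exceeded with a given probability (the lower tail holds the low flows).

def flow_duration_curve(flows):
    """(flow, exceedance probability) pairs, highest flow first (Weibull P = m/(n+1))."""
    ranked = sorted(flows, reverse=True)
    n = len(ranked)
    return [(q, (i + 1) / (n + 1)) for i, q in enumerate(ranked)]

def exceedance_flow(flows, p):
    """Lowest flow whose exceedance probability is still <= p (e.g. p=0.95 -> Q95)."""
    eligible = [q for q, prob in flow_duration_curve(flows) if prob <= p]
    return eligible[-1] if eligible else None
```

A single value read this way serves as the reference minimum environmental flow that the abstract describes.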

Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow

Procedia PDF Downloads 26
4958 Sustainable Hydrogel Nanocomposites Based on Grafted Chitosan and Clay for Effective Adsorption of Cationic Dye

Authors: H. Ferfera-Harrar, T. Benhalima, D. Lerari

Abstract:

Contamination of water, due to the discharge of untreated industrial wastewaters into the ecosystem, has become a serious problem for many countries. In this study, bioadsorbents based on chitosan-g-poly(acrylamide) and montmorillonite (MMt) clay (CTS-g-PAAm/MMt) hydrogel nanocomposites were prepared via free‐radical grafting copolymerization and crosslinking of acrylamide monomer (AAm) onto the natural polysaccharide chitosan (CTS) as backbone, in the presence of various contents of MMt clay as nanofiller. They were then hydrolyzed to obtain highly functionalized pH‐sensitive nanomaterials with superior swelling properties. Their structural characterization was conducted by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) analyses. The adsorption performances of the developed nanohybrids were examined for the removal of methylene blue (MB) cationic dye from aqueous solutions. The factors affecting the removal of MB, such as clay content, pH, adsorbent dose, initial dye concentration, and temperature, were explored. The adsorption process was found to be highly pH dependent. The adsorption kinetic results showed that the prepared adsorbents have remarkable adsorption capacity and a fast adsorption rate: more than 88% MB removal efficiency was reached after 50 min in a 200 mg L-1 dye solution. In addition, incorporating clay enhanced the adsorption capacity of the CTS-g-PAAm matrix from 1685 mg g-1 to a highest value of 1749 mg g-1 for the optimized nanocomposite containing 2 wt.% of MMt. The experimental kinetic data were well described by the pseudo-second-order model, while the equilibrium data were represented perfectly by the Langmuir isotherm model. The maximum Langmuir equilibrium adsorption capacity (qm) was found to increase from 2173 mg g−1 to 2221 mg g−1 on adding 2 wt.% of clay nanofiller. Thermodynamic parameters revealed the spontaneous and endothermic nature of the process. In addition, the reusability study revealed that these bioadsorbents could be regenerated well, with desorption efficiency above 87%, and without any obvious decrease in removal efficiency compared to the starting materials even after four consecutive adsorption/desorption cycles, where it still exceeded 64%. These results suggest that the optimized nanocomposites are promising low-cost bioadsorbents.
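The Langmuir fitting step reported above can be sketched via the common linearised form Ce/qe = Ce/qm + 1/(KL·qm). The equilibrium data in the usage test are synthetic, generated from known parameters so the fit recovers them; they are not the study's measurements.

```python
# Sketch: recover the Langmuir parameters (qm, KL) from equilibrium data
# via ordinary least squares on the linearised isotherm plot.

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def langmuir_fit(Ce, qe):
    """Fit Ce/qe vs Ce; slope = 1/qm, intercept = 1/(KL*qm)."""
    slope, intercept = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
    return 1.0 / slope, slope / intercept   # (qm, KL)
```

A high correlation coefficient on this linear plot is what "the equilibrium data were represented perfectly by the Langmuir model" corresponds to in practice.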

Keywords: chitosan, clay, dye adsorption, hydrogels nanocomposites

Procedia PDF Downloads 118
4957 Mixed Alumina-Silicate Materials for Groundwater Remediation

Authors: Ziyad Abunada, Abir Al-tabbaa

Abstract:

The current work investigates the effectiveness of combined mixed materials, mainly modified bentonites and organoclay, in treating contaminated groundwater. Sodium bentonite was modified with a quaternary amine surfactant, dimethyl ammonium chloride, to produce organoclay (OC). Inorgano-organo bentonite (IOB) was produced by intercalating alkylbenzyd-methyl-ammonium chloride surfactant into sodium bentonite and pillaring with chlorohydrol pillaring agent. The materials' efficiency was tested for both TEX compounds from model-contaminated water and a mixture of organic contaminants found in groundwater samples collected from a contaminated site in the United Kingdom. The sorption data fitted well to both the Langmuir and Freundlich adsorption models, reflecting a dual sorption mechanism, with correlation coefficients greater than 0.89 for all materials. The mixed materials showed higher sorptive capacity than the individual materials, with a preference order of X > E > T, and a maximum sorptive capacity of 21.8 mg/g was reported for the IOB-OC material for o-xylene. The mixed materials also showed at least twice the affinity towards the mixture of organic contaminants in the groundwater samples. Other experimental parameters, such as pH and contact time, were also investigated. The pseudo-second-order rate equation provided the best description of the adsorption kinetics.
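The pseudo-second-order kinetics fit mentioned here can be sketched via its linearised form t/qt = 1/(k2·qe²) + t/qe. The uptake data in the usage test are synthetic, generated from known parameters; they are not measurements from this study.

```python
# Sketch: recover pseudo-second-order parameters (qe, k2) from
# uptake-versus-time data via least squares on the linearised plot.

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def pso_fit(t, qt):
    """Fit t/qt vs t; slope = 1/qe, intercept = 1/(k2*qe^2)."""
    slope, intercept = linfit(t, [ti / qi for ti, qi in zip(t, qt)])
    return 1.0 / slope, slope ** 2 / intercept   # (qe, k2)
```
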

Keywords: modified bentonite, groundwater, adsorption, contaminants

Procedia PDF Downloads 219