Search results for: input processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5557

1537 Anti-Nutritional Factors, In-Vitro Trypsin, Chymotrypsin and Peptidase Multi Enzyme Protein Digestibility of Some Melon (Egusi) Seeds and Their Protein Isolates

Authors: Joan O. Ogundele, Aladesanmi A. Oshodi, Adekunle I. Amoo

Abstract:

In-vitro multi-enzyme protein digestibility (IVMPD) and some anti-nutritional factors (ANF) of five melon (egusi) seed flours (MSF) and their protein isolates (PI) were determined. The PI have potential comparable to that of soya beans. Knowing the IVMPD and ANF of these protein sources is important for ensuring their safety when adapted as alternative protein sources to substitute for cow milk, which is relatively expensive in Nigeria. Standard methods were used to produce PI of Citrullus colocynthis, Citrullus vulgaris, African Wine Kettle gourd (Lagenaria siceraria I), Basket Ball gourd (Lagenaria siceraria II) and Bushel Giant gourd (Lagenaria siceraria III) seeds, and to determine the ANF and IVMPD of the MSF and PI, unheated and at 37 °C. The multi-enzymes used were trypsin, chymotrypsin and peptidase. IVMPD of the MSF ranged from (70.67 ± 0.70)% (C. vulgaris) to (72.07 ± 1.79)% (L. siceraria I), while that of the PI ranged from 74.33% (C. vulgaris) to 77.55% (L. siceraria III). IVMPD of the PI was higher than that of the MSF. Heating increased the IVMPD of the MSF to an average of 79.40% and that of the PI to an average of 84.14%. Average ANF in the MSF were tannin (0.11 mg/g) and phytate (0.23%). Differences in IVMPD of the MSF and their PI at different temperatures may arise from processing conditions that alter the release of amino acids from proteins by enzymatic processes. ANF in the MSF were relatively low and were even lower in the PI, therefore making the PI safer for human consumption as an alternative source of protein.

Keywords: anti-nutrients, enzymatic protein digestibility, melon (egusi), protein isolates

Procedia PDF Downloads 108
1536 Treatment of Leather Industry Wastewater with Advanced Treatment Methods

Authors: Seval Yilmaz, Filiz Bayrakci Karel, Ali Savas Koparal

Abstract:

Textile products made from leather have long been indispensable to consumers. Various chemicals are used in the processing of leather products to enhance the durability of the end products. Wastewaters from the leather industry, which contain these chemicals, exhibit toxic effects on the receiving environment and threaten the natural ecosystem. In this study, leather industry wastewater (LIW), which carries high contaminant loads, was treated using advanced treatment techniques instead of conventional methods. The performance of batch electrooxidation (EO) using boron-doped diamond (BDD) electrodes in a monopolar configuration for removal of chemical oxygen demand (COD) from LIW was investigated. The influences of electrolysis time, current density (5, 10, 20, 30 and 50 mA/cm²) and initial pH (3.80 (the natural pH of LIW), 7 and 9) on removal efficiency were examined in a batch stirred cell to determine the best treatment conditions. Because the current density applied to an electrochemical reactor is directly proportional to the electric energy consumed, electrical energy consumption was monitored during the experiments. The best experimental conditions were: electrolysis time = 60 min, current density = 30.0 mA/cm², pH 7. Under these conditions, a COD removal of 53.59% was achieved and the total energy consumption was 13.03 kWh/m³. It is concluded that electrooxidation constitutes a plausible and promising method for the treatment of LIW.
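As a rough illustration of how an energy figure like the reported 13.03 kWh/m³ is obtained, the specific energy consumption of a batch cell can be computed from cell voltage, current, electrolysis time and treated volume. The values below are hypothetical, not taken from the study; this is only a sketch of the standard calculation.

```python
# Specific energy consumption for a batch electrochemical cell.
# With U in volts, I in amperes, t in hours and V in litres,
# U*I*t/V conveniently comes out in kWh/m^3 (since Wh/L == kWh/m^3).
def specific_energy_kwh_per_m3(U_volts, I_amps, t_hours, V_litres):
    return U_volts * I_amps * t_hours / V_litres

# Hypothetical example: 10 V cell, 0.75 A, 60 min electrolysis, 0.5 L sample
E = specific_energy_kwh_per_m3(10.0, 0.75, 1.0, 0.5)  # -> 15.0 kWh/m^3
```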

Keywords: BDD electrodes, COD removal, electrochemical treatment, leather industry wastewater

Procedia PDF Downloads 152
1535 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils using Press-in Sheet Piling Technique: 1-g Shake Table Tests

Authors: Md. Kausar Alam, Ramin Motamed

Abstract:

The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research is focused on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies on the press-in sheet piling technique for liquefaction mitigation using 1-g shake table tests in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1-g shake table experiments were conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shake table experiment; the scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop the unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth was inserted into the liquefiable soil surrounding the shallow foundation using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in foundation movements (settlement and tilt) and generated excess pore water pressures.
In addition, this paper discusses the cost-effectiveness and carbon footprint features of the studied mitigation measures.

Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction

Procedia PDF Downloads 88
1534 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased incredibly in parallel. However, meeting the demands of a mission is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using several vehicles of the same type together as a swarm helps satisfy the time constraints of a mission effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Besides, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and such cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission on its own without losing its way in the unknown labyrinth. Thus, to assist the ground vehicle, an autonomous air drone is used to solve the problem cooperatively. The drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without any external support or input, using only the capabilities of the two autonomous vehicles.
To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in coordination. In this paper, cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and we describe how an autonomous ground vehicle and an autonomous flying platform work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 121
1533 Electrodeposition and Selenization of CuIn Alloys for the Synthesis of Photoactive CuIn1-xGaxSe2 (CIGS) Thin Films

Authors: Mohamed Benaicha, Mahdi Allam

Abstract:

A new two-stage electrochemical process is studied as a safe, large-area and low-cost technique for the production of semiconducting CuInSe2 (CIS) thin films. CuIn precursors were first potentiostatically electrodeposited onto molybdenum substrates from an acidic thiocyanate electrolyte. In a second stage, the prepared metallic CuIn layers were used as substrates in the selenium electrochemical deposition system and subjected to a thermal treatment in a vacuum atmosphere to eliminate binary phase formation by reaction of the Cu2-xSe and InxSey selenides, leading to the formation of the CuInSe2 thin film. Electrochemical selenization from an aqueous electrolyte is introduced as an alternative to the toxic and hazardous H2Se or Se vapor-phase selenization used in physical techniques. In this study, the influence of film deposition parameters such as bath composition, temperature and potential on film properties was studied. The electrochemical, morphological, structural and compositional properties of the electrodeposited thin films were characterized using various techniques. Cyclic and stripping-cyclic voltammetry (CV, SCV), scanning electron microscopy (SEM) and energy-dispersive X-ray microanalysis (EDX) investigations revealed good reproducibility and homogeneity of the film composition. Thereby, optimal technological parameters for the electrochemical production of CuIn and Se as precursors for CuInSe2 thin layers are determined.

Keywords: photovoltaic, CIGS, copper alloys, electrodeposition, thin films

Procedia PDF Downloads 452
1532 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. Neurons transmit information between neurons or between neurons and target cells by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core-conductor assumptions, where membrane potential is conserved at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy compared with the cat models, and the squid's exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, the exergy loss and the entropy generation. Entropy generation and exergy loss vary not only between vertebrates and invertebrates but also within the same class.
Concurrently, the Na+ ion load of a single action potential, metabolic energy utilization and their thermodynamic aspects are evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation differed in each model depending on the variations in ion transport along the channels.
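The link between the two dissipation measures used throughout this abstract is the standard Gouy-Stodola relation; the notation below is mine, not the authors', with T_0 denoting the reference (environment) temperature:

```latex
% Exergy destruction is proportional to entropy generation
% (Gouy-Stodola theorem); T_0 is the environment temperature.
\dot{X}_{\mathrm{dest}} = T_0 \, \dot{S}_{\mathrm{gen}}
```

This is why the study can report exergy loss and entropy generation side by side: at a fixed reference temperature, the two quantities carry the same information about irreversibility.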

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 384
1531 Ferulic Acid-Grafted Chitosan: Thermal Stability and Feasibility as an Antioxidant for Active Biodegradable Packaging Film

Authors: Sarekha Woranuch, Rangrong Yoksan

Abstract:

Active packaging has been developed by incorporating certain additives, in particular antimicrobial and antioxidant agents, into packaging systems to maintain or extend product quality and shelf life. Ferulic acid is one of the most effective natural phenolic antioxidants and has been used in food, pharmaceutical and active packaging film applications. However, most phenolic compounds are sensitive to oxygen, light and heat; their activities are thus lost during product formulation and processing. Grafting ferulic acid onto a polymer is an alternative way to reduce this loss under thermal processing. The objectives of the present research were therefore to study the thermal stability of ferulic acid after grafting onto chitosan, and to investigate the possibility of using ferulic acid-grafted chitosan (FA-g-CTS) as an antioxidant for active biodegradable packaging film. FA-g-CTS was incorporated into biodegradable film via a two-step process, i.e., compounding extrusion at temperatures up to 150 °C followed by blown-film extrusion at temperatures up to 175 °C. Although incorporating FA-g-CTS at 0.02–0.16% (w/w) reduced the water vapor barrier property and the extensibility, the films showed improved oxygen barrier property and antioxidant activity. The radical scavenging activity and reducing power of the film containing 0.04% (w/w) FA-g-CTS were higher than those of the neat film by about 254% and 94%, respectively. Tensile strength and rigidity of the films were not significantly affected by adding FA-g-CTS at 0.02–0.08% (w/w). The results indicated that FA-g-CTS could potentially be used as an antioxidant for active packaging film.

Keywords: active packaging film, antioxidant activity, chitosan, ferulic acid

Procedia PDF Downloads 498
1530 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The KPI system prototype was designed for the warehouse of a furniture business by the following methodology: identify the scope of the research and study related papers; gather the necessary data and user requirements; develop key performance indicators based on the balanced scorecard; design the program and database for the KPIs; code the program and set up the database relationships; and finally test and debug each module. The balanced scorecard (BSC) was used for selecting and grouping the key performance indicators. Microsoft SQL Server 2010 was used to create the system database, and Microsoft Visual C# 2010 was chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal perspective, and learning and growth perspective. Each menu consists of KPI forms, and each form contains a data import section, a data input section, a data search and edit section, and a report section. The system generates five main reports: a KPI detail report, a KPI summary report, a KPI graph report, a benchmarking summary report and a benchmarking graph report; the user selects the report conditions and time period. Having developed and tested the system, we found it to be one way of judging the extent to which warehouse objectives have been achieved, and it encourages warehouse functions to proceed more efficiently. To be useful for other industries, the system can be adjusted appropriately.
To increase the usefulness of the KPI system, the recommendations for further development are as follows: -The warehouse should periodically review the target values and set more suitable targets as conditions fluctuate in the future. -The warehouse should periodically review the key performance indicators and set more suitable ones as conditions fluctuate, to increase competitiveness and take advantage of new opportunities.
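The balanced-scorecard grouping described above can be sketched in a few lines: each KPI is assigned to a perspective and its achievement against target is computed. The KPI names, targets and values below are hypothetical illustrations, not data from the case study.

```python
# Minimal sketch of grouping KPIs by balanced-scorecard perspective.
# All names and numbers are invented for illustration.
kpis = [
    {"name": "inventory turnover", "perspective": "internal",  "target": 12.0, "actual": 10.8},
    {"name": "order fill rate",    "perspective": "customer",  "target": 0.98, "actual": 0.95},
    {"name": "storage cost/unit",  "perspective": "financial", "target": 1.50, "actual": 1.65},
]

def achievement(kpi):
    # Percent of target achieved (higher-is-better KPIs assumed)
    return 100.0 * kpi["actual"] / kpi["target"]

by_perspective = {}
for kpi in kpis:
    by_perspective.setdefault(kpi["perspective"], []).append(
        (kpi["name"], round(achievement(kpi), 1)))
```

A report module would then render each perspective's list as a detail or summary table, as the abstract describes.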

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 424
1529 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of artificial intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset of audio recordings of the Wolof words for 'yes' and 'no' was collected. A two-stage data augmentation approach was adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-frequency cepstral coefficients were implemented. Convolutional neural networks (CNNs) have proven very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For voice response classification, the recordings were transformed into sound frequency feature spectra, and an image classification methodology was then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
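The MFCC feature step the abstract relies on can be sketched with NumPy alone: frame the waveform, take the power spectrum, apply a triangular mel filterbank, and decorrelate the log energies with a DCT-II. The frame size, hop, and filter counts below are common defaults, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, mid, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, mid):
            fb[i - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fb[i - 1, k] = (hi - k) / max(hi - mid, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame, window, and take the power spectrum of each frame
    frames = np.array([signal[i:i + n_fft]
                       for i in range(0, len(signal) - n_fft + 1, hop)])
    power = np.abs(np.fft.rfft(frames * np.hamming(n_fft), axis=1)) ** 2 / n_fft
    # Log mel-filterbank energies, decorrelated with a DCT-II
    log_e = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return log_e @ dct.T
```

The resulting (frames x coefficients) array is exactly the spectrum-like image a CNN classifier can be trained on.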

Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification

Procedia PDF Downloads 104
1528 Spatial Information and Urbanizing Futures

Authors: Mohammad Talei, Neda Ranjbar Nosheri, Reza Kazemi Gorzadini

Abstract:

Today, municipalities are searching for new tools for increasing public participation at different levels of urban planning. This approach involves the community in the planning process using participatory approaches instead of the traditional top-down planning methods. These tools can be used to obtain the particular problems of urban furniture from the residents' point of view. One tool designed with this goal is public participation GIS (PPGIS), which enables citizens to record and follow up their observations and spatial knowledge regarding the main problems of the city, specifically urban furniture, in the form of maps. However, despite the good intentions of PPGIS, its practical implementation in developing countries faces many problems, including the lack of basic supporting infrastructure and services and the unavailability of sophisticated public participatory models. In this research, we develop a PPGIS using Web 2.0 to collect volunteered geodata and to perform spatial analysis based on spatial online analytical processing (SOLAP) and spatial data mining (SDM). These tools provide urban planners with proper information regarding the type, spatial distribution and clusters of reported problems. The system is implemented in a case study area in Tehran, Iran, and the challenges to making it applicable, as well as its potential for real urban planning, have been evaluated. It helps decision makers better understand, plan and allocate scarce resources for providing the most requested urban furniture.
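A toy version of the cluster analysis implied above: citizen reports, as (lon, lat, category) tuples, are binned into grid cells and the densest cell is reported as a hotspot. The coordinates, categories and cell size are hypothetical, and real SOLAP/SDM tooling would replace this sketch.

```python
from collections import Counter

# Hypothetical citizen reports: (longitude, latitude, problem category)
reports = [
    (51.41, 35.70, "broken bench"),
    (51.42, 35.70, "broken bench"),
    (51.41, 35.71, "missing bin"),
    (51.41, 35.70, "broken bench"),
]

def hotspot(reports, cell=0.05):
    # Snap each report to a grid cell and count reports per (cell, category)
    cells = Counter()
    for lon, lat, cat in reports:
        key = (round(lon / cell) * cell, round(lat / cell) * cell, cat)
        cells[key] += 1
    return cells.most_common(1)[0]

top_cell, count = hotspot(reports)  # densest (cell, category) pair
```

A planner reading the output knows which grid cell, and which kind of urban furniture problem, is most frequently reported.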

Keywords: PPGIS, spatial information, urbanizing futures, urban planning

Procedia PDF Downloads 716
1527 Reliability-Centered Maintenance Application for the Development of Maintenance Strategy for a Cement Plant

Authors: Nabil Hameed Al-Farsi

Abstract:

This study's main goal is to develop a model and a maintenance strategy for a cement factory, the Arabian Cement Company, Rabigh Plant. The proposed work depends on a reliability-centered maintenance (RCM) approach to develop a strategy and maintenance schedule that increases the reliability of the production system components, thus ensuring continuous productivity. Cost-effective maintenance of the plant's dependability performance is the key goal of reliability-based maintenance. The cement plant consists of seven important process steps, and the maintenance plan based on the RCM method is developed in ten steps, from selecting units and data to performing and updating the model. The processing unit chosen for the analysis is the calciner unit; for the model's validation, the failure data history acquired from the maintenance department of Travancore Titanium Products Ltd (TTP) was used. After applying the proposed model, the results of the maintenance simulation justified reconsidering the plant's existing scheduled maintenance policy. The results indicate the need for preventive maintenance for all Class A criticality equipment instead of planned maintenance, and breakdown maintenance for all other equipment depending on its criticality and an FMEA report. Consequently, the additional cost of preventive maintenance would be offset by the cost savings from breakdown maintenance on the remaining equipment.
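The FMEA-driven split between preventive and breakdown maintenance can be sketched with the usual Risk Priority Number (RPN = severity x occurrence x detection). The equipment names, scores and Class A threshold below are hypothetical, not the plant's actual FMEA data.

```python
# Hypothetical FMEA scores on the usual 1-10 scales
equipment = {
    "kiln drive":    {"severity": 9, "occurrence": 6, "detection": 5},
    "raw mill fan":  {"severity": 7, "occurrence": 4, "detection": 4},
    "belt conveyor": {"severity": 4, "occurrence": 3, "detection": 2},
}

def rpn(scores):
    # Risk Priority Number = severity x occurrence x detection
    return scores["severity"] * scores["occurrence"] * scores["detection"]

def criticality_class(scores, class_a_threshold=200):
    # Class A items get preventive maintenance; the rest run to breakdown
    return "A" if rpn(scores) >= class_a_threshold else "B/C"

ranked = sorted(equipment, key=lambda e: rpn(equipment[e]), reverse=True)
```

Ranking by RPN gives the maintenance planner the priority order; the threshold then decides which items move from breakdown to preventive maintenance.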

Keywords: engineering, reliability, strategy, maintenance, failure modes, effects and criticality analysis (FMEA)

Procedia PDF Downloads 160
1526 A BERT-Based Model for Financial Social Media Sentiment Analysis

Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe

Abstract:

The purpose of sentiment analysis is to determine the sentiment strength (e.g., positive, negative, neutral) of a textual source for good decision-making. Natural language processing in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models, such as BERT, have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services, and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and the large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work focuses on sentiment analysis of social media text from platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis, and extensive experiments demonstrate its effectiveness and robustness.

Keywords: BERT, financial markets, Twitter, sentiment analysis

Procedia PDF Downloads 144
1525 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, face a complex set of problems, including noise and air pollution, which are considered hot topics (cf. the Clean Air Act in London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, which produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion at high temperatures (e.g., car engines or thermal power stations); the same holds for industrial plants. Whether there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas is what has to be investigated, and it is the topic of this paper. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have input for both week and weekend days; in this way it will be possible to see how the situation changes during the week.
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw sensor values. To do so, the data will be converted to a scale that goes up to 100% and shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help choose the right mitigation solutions to apply in the analysed area, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
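The comparison step described above reduces to two small computations: rescaling each pollutant series to a 0-100% range, and measuring their association, for instance with a Pearson correlation. The readings below are invented for illustration; note that Pearson correlation is unchanged by the linear rescaling, so the percentage scale is purely for plotting.

```python
import math

# Hypothetical co-located readings from one noise/air sensor pair
noise_db = [55.0, 62.0, 70.0, 66.0, 58.0]   # dB(A)
no2_ugm3 = [28.0, 41.0, 55.0, 47.0, 33.0]   # ug/m^3

def to_percent(xs):
    # Min-max scale a series onto 0..100 for comparable graphs
    lo, hi = min(xs), max(xs)
    return [100.0 * (x - lo) / (hi - lo) for x in xs]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(to_percent(noise_db), to_percent(no2_ugm3))
```

An r close to +1 for a given location would support choosing a single mitigation intervention that addresses both pollutants at once.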

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 206
1524 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing

Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar

Abstract:

The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as better material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.

Keywords: Internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development

Procedia PDF Downloads 14
1523 Deep Reinforcement Learning for Advanced Pressure Management in Water Distribution Networks

Authors: Ahmed Negm, George Aggidis, Xiandong Ma

Abstract:

With the diverse nature of urban cities, customer demand patterns, landscape topologies and even seasonal weather trends, managing our water distribution networks (WDNs) has proved a complex task. These unpredictable circumstances manifest as pipe failures, intermittent supply and burst events, adding to water loss, energy waste and increased carbon emissions. While such events are unavoidable, advanced pressure management has proved an effective tool to control and mitigate them. Water utilities have nevertheless struggled to develop a real-time control method that remains resilient when confronting the challenges of water distribution. In this paper, we use deep reinforcement learning (DRL) algorithms as a novel pressure control strategy to minimise pressure violations and leakage under both burst and background leakage conditions. Agents based on advantage actor-critic (A2C) and recurrent proximal policy optimisation (Recurrent PPO) were trained and compared with benchmark optimisation algorithms (differential evolution and particle swarm optimisation). A2C reduces leakage by 32.48% under burst conditions and 67.17% under background conditions, the highest performance among the DRL algorithms. A2C and Recurrent PPO compared well against the benchmarks, with higher processing speed and lower computational effort.
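A DRL pressure-control agent of the kind described needs a reward that trades off pressure violations against leakage. The sketch below is a hedged illustration of that idea only; the pressure bounds, weights and leakage figures are invented and are not the authors' formulation.

```python
# Illustrative reward for a pressure-management agent: penalise any node
# pressure outside [p_min, p_max] (in metres of head) plus a weighted
# leakage term. All constants here are hypothetical.
def reward(node_pressures_m, leakage_m3h, p_min=20.0, p_max=80.0,
           w_violation=1.0, w_leak=0.1):
    violation = sum(max(p_min - p, 0.0) + max(p - p_max, 0.0)
                    for p in node_pressures_m)
    return -(w_violation * violation + w_leak * leakage_m3h)

# A state with all nodes inside bounds and low leakage scores higher
ok = reward([30.0, 45.0, 60.0], leakage_m3h=5.0)
bad = reward([15.0, 45.0, 90.0], leakage_m3h=12.0)
```

An A2C or Recurrent PPO agent trained against such a signal learns valve settings that keep pressures feasible while pushing leakage down.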

Keywords: deep reinforcement learning, pressure management, water distribution networks, leakage management

Procedia PDF Downloads 74
1522 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs

Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang

Abstract:

As more and more Drosophila driver and neuron images become available, finding the similarity relationships among them is an important task for functional inference. A general problem is how to find a Drosophila driver image that covers a given set of Drosophila driver/neuron images. To solve this problem, the intersection/union region for a set of images is computed first; a comparison step then calculates the similarities between that region and other images. In this paper, three encoding schemes, namely Integer, Boolean, and Decimal, are proposed to encode each image as a one-dimensional structure. The intersection/union region of these images can then be computed using compare operations, Boolean operators, and a lookup-table method. Finally, the comparison is performed against the union region, and the similarity score is calculated using the Tanimoto coefficient. The region computation methods were also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the Boolean scheme performs best; in the region computation phase, the Decimal scheme performs best when the number of images is large. The speedup ratio reaches 12 on 16 CPUs. This work was supported by the Ministry of Science and Technology under grant MOST 106-2221-E-182-070.
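A minimal sketch of the Boolean encoding idea: each image becomes a flat bit vector, intersections/unions are element-wise AND/OR, and similarity between regions is a Tanimoto coefficient. The 4x4 "images" below are synthetic stand-ins for Drosophila driver/neuron image stacks, not the paper's data or parallel implementation.

```python
# Boolean-encoding sketch: flatten each 2D binary image to a bit vector,
# compute intersection/union element-wise, then score with Tanimoto.

def encode(image):                     # flatten a 2D binary image to bits
    return [1 if v else 0 for row in image for v in row]

def union(vectors):
    return [int(any(bits)) for bits in zip(*vectors)]

def intersection(vectors):
    return [int(all(bits)) for bits in zip(*vectors)]

def tanimoto(a, b):                    # |a AND b| / |a OR b|
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 0.0

img1 = [[1, 1, 0, 0]] * 4              # two overlapping synthetic "images"
img2 = [[0, 1, 1, 0]] * 4
u = union([encode(img1), encode(img2)])
i = intersection([encode(img1), encode(img2)])
print(sum(u), sum(i), round(tanimoto(encode(img1), encode(img2)), 3))
```

In the paper's setting, the per-element loops above are exactly the kind of independent work that OpenMP parallelises across cores.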

Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP

Procedia PDF Downloads 229
1521 Cooperative Learning: A Case Study on Teamwork through Community Service Project

Authors: Priyadharshini Ahrumugam

Abstract:

Research has repeatedly shown that cooperative groups produce remarkable achievements compared with solitary or individualistic efforts. Based on Johnson and Johnson's model of cooperative learning, the five key components of cooperation are positive interdependence, face-to-face promotive interaction, individual accountability, social skills, and group processing. In 2011, the Malaysian Ministry of Higher Education (MOHE) introduced the Holistic Student Development policy with the aim of developing morally sound individuals equipped with lifelong learning skills, and the Community Service project was included in this improvement initiative. The purpose of this study is to assess the relationship of team-based learning in facilitating, in particular, students' positive interdependence and face-to-face promotive interaction. The research methods involve in-depth interviews with the team leaders and selected team members, and a content analysis of the undergraduate students' reflective journals. A significant positive relationship was found between students' progressive outlook towards teamwork and the two highlighted components. The key findings show that students gained in their individual learning and work results through teamwork and interaction with other students. The inclusion of Community Service as a MOHE subject resonates with cooperative learning methods that enhance supportive relationships and develop students' social skills together with their professional skills.

Keywords: community service, cooperative learning, positive interdependence, teamwork

Procedia PDF Downloads 301
1520 Emotional Awareness and Working Memory as Predictive Factors for the Habitual Use of Cognitive Reappraisal among Adolescents

Authors: Yuri Kitahara

Abstract:

Background: Cognitive reappraisal refers to an emotion regulation strategy in which one changes the interpretation of emotion-eliciting events. Numerous studies show that cognitive reappraisal is associated with mental health and better social functioning. However, the predictive factors of adaptive emotion regulation remain insufficiently examined. The present study examined the factors contributing to the habitual use of cognitive reappraisal, with a focus on emotional awareness and working memory. Methods: Data were collected from 30 junior high school students, using a Japanese version of the Emotion Regulation Questionnaire (ERQ), the Levels of Emotional Awareness Scale for Children (LEAS-C), and an N-back task. Results: A positive correlation between emotional awareness and cognitive reappraisal was observed in the high-working-memory group (r = .54, p < .05), whereas no significant relationship was found in the low-working-memory group. In addition, the results of the analysis of variance (ANOVA) showed a significant interaction between emotional awareness and working memory capacity (F(1, 26) = 7.74, p < .05). Subsequent analysis of simple main effects confirmed that high working memory capacity significantly increases the use of cognitive reappraisal for high-emotional-awareness subjects, and significantly decreases it for low-emotional-awareness subjects. Discussion: These results indicate that when one has an adequate ability for simultaneous processing of information, explicit understanding of emotion contributes to adaptive cognitive emotion regulation. The findings are discussed alongside neuroscientific claims.

Keywords: cognitive reappraisal, emotional awareness, emotion regulation, working memory

Procedia PDF Downloads 219
1519 Full-Field Estimation of Cyclic Threshold Shear Strain

Authors: E. E. S. Uy, T. Noda, K. Nakai, J. R. Dungca

Abstract:

Cyclic threshold shear strain is the cyclic shear strain amplitude that serves as the indicator of the development of pore water pressure. The parameter can be obtained by performing a cyclic triaxial test, shaking table test, cyclic simple shear test, or resonant column test. In a cyclic triaxial test, other researchers install measuring devices in close proximity to the soil to measure the parameter. In this study, an attempt was made to estimate the cyclic threshold shear strain parameter using a full-field measurement technique. The technique uses a camera to monitor and measure the movement of the soil. For this study, the technique was incorporated in a strain-controlled consolidated undrained cyclic triaxial test. Calibration of the camera was first performed to ensure that it can properly measure deformation under cyclic loading. Its capacity to measure deformation was also investigated using a cylindrical rubber dummy. Two-dimensional image processing was implemented, and the Lucas-Kanade optical flow algorithm was applied to track the movement of the soil particles. Results from the full-field measurement technique were compared with the results from a linear variable displacement transducer. A range of values was determined from the estimation, due to the nonhomogeneous deformation of the soil observed during cyclic loading. The minimum values were in the order of 10⁻² % in some areas of the specimen.
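The Lucas-Kanade principle used for particle tracking can be sketched in a few lines: solve the optical-flow normal equations over a window to estimate displacement between two frames. The synthetic "specimen" image and one-pixel shift below are illustrative, not the study's test images.

```python
import numpy as np

# Minimal single-window Lucas-Kanade estimate of the displacement between
# two frames. A smoothed random image stands in for the soil specimen; it
# is shifted one pixel to the right to simulate particle movement.

def lucas_kanade(f0, f1):
    Ix = np.gradient(f0, axis=1)               # spatial gradients
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0                                # temporal gradient
    # Least-squares solve of Ix*vx + Iy*vy = -It over the whole window.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                                    # (vx, vy) in pixels

rng = np.random.default_rng(0)
frame = rng.random((32, 32))
frame = np.convolve(frame.ravel(), np.ones(5) / 5, mode="same").reshape(32, 32)
shifted = np.roll(frame, 1, axis=1)             # pattern moves 1 px right
vx, vy = lucas_kanade(frame, shifted)
print(round(float(vx), 1), round(float(vy), 1))
```

In practice the solve is repeated in small windows around each tracked particle, giving the full displacement field from which shear strains are computed.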

Keywords: cyclic loading, cyclic threshold shear strain, full-field measurement, optical flow

Procedia PDF Downloads 227
1518 Extracellular Phytase from Lactobacillus fermentum spp KA1: Optimization of Enzyme Production and Its Application for Improving the Nutritional Quality of Rice Bran

Authors: Neha Sharma, Kanthi K. Kondepudi, Naveen Gupta

Abstract:

Phytases are phytate-specific phosphatases catalyzing the step-wise dephosphorylation of phytate, which acts as an anti-nutritional factor in food due to its strong capacity to bind minerals. In recent years, microbial phytases have been explored for improving the nutritional quality of food, but a major limitation is the acceptability of phytases from these microorganisms. Therefore, efforts are being made to isolate organisms that are generally regarded as safe for human consumption, such as lactic acid bacteria (LAB). Phytases from these organisms will have an edge over other phytase sources due to their probiotic attributes. Only a few LAB have been reported to show phytase activity, and that activity is generally intracellular. LAB producing extracellular phytase will be more useful, as it can degrade phytate more effectively; moreover, an enzyme from such an isolate will also have applications in food processing. Only a few species of Lactobacillus producing extracellular phytase have been reported so far. This study reports the isolation of a probiotic strain of Lactobacillus fermentum spp KA1 which produces extracellular phytase. Conditions for the optimal production of phytase were optimized, resulting in an approximately 13-fold increase in enzyme yield. The phytate degradation potential of the extracellular phytase in rice bran was explored and conditions for optimal degradation were optimized. Under optimal conditions, there was a 43.26% release of inorganic phosphate and a 6.45% decrease in phytate content.

Keywords: Lactobacillus, phytase, phytate reduction, rice bran

Procedia PDF Downloads 183
1517 Information Retrieval from Internet Using Hand Gestures

Authors: Aniket S. Joshi, Aditya R. Mane, Arjun Tukaram

Abstract:

In the 21st century e-world, people continuously receive daily information such as weather conditions, news, stock exchange updates, new projects, cricket scores, and other sports updates. In busy situations, they want this information with minimal use of the keyboard and minimal time spent. Today, to retrieve such information, users must repeat the same mouse and keyboard actions, which costs time and is inconvenient. In India, owing to rural backgrounds, many people are not very familiar with computers and the internet. Moreover, in small clinics, small offices, hotels, and airports, there should be a system that retrieves daily information with minimal keyboard and mouse actions. We plan to design an application-based project that can easily retrieve information with minimal keyboard and mouse use, making the task more convenient and easier. This is possible with an image processing application that captures real-time hand gestures, matches them in the system, and retrieves the corresponding information. Once a function is selected with a hand gesture, the system reports the action's information to the user. In this project, real-time hand gesture movements select the required option, which is displayed on the screen in the form of RSS feeds; the gesture selects the option and the information is shown. Real-time hand gestures make the application handier and easier to use.
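A hedged sketch of the first two recognition steps named in the keywords (HSV color model, blob detection): threshold an HSV image to a skin-color mask, then keep the largest connected blob as the hand region. The 6x6 "image" and the HSV skin range are illustrative assumptions, not the authors' pipeline.

```python
from collections import deque

# Illustrative HSV skin range on a 0-255 scale (an assumption, not tuned).
SKIN = dict(h=(0, 50), s=(40, 255), v=(60, 255))

def skin_mask(hsv):                                  # hsv[r][c] = (h, s, v)
    return [[1 if SKIN["h"][0] <= p[0] <= SKIN["h"][1]
             and SKIN["s"][0] <= p[1] <= SKIN["s"][1]
             and SKIN["v"][0] <= p[2] <= SKIN["v"][1] else 0
             for p in row] for row in hsv]

def largest_blob(mask):                              # 4-connected flood fill
    rows, cols, seen, best = len(mask), len(mask[0]), set(), set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                blob, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    blob.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                best = max(best, blob, key=len)
    return best

skin, bg = (20, 120, 150), (120, 20, 20)             # skin vs background pixel
hsv = [[bg] * 6 for _ in range(6)]
for r in range(1, 4):
    for c in range(1, 4):
        hsv[r][c] = skin                              # 3x3 "hand" blob
hsv[5][5] = skin                                      # 1-pixel noise speck
print(len(largest_blob(skin_mask(hsv))))
```

The blob's shape or trajectory over successive frames would then be matched against stored gestures to pick the corresponding RSS feed.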

Keywords: hand detection, hand tracking, hand gesture recognition, HSV color model, Blob detection

Procedia PDF Downloads 277
1516 Development of Mineral Carbonation Process from Ultramafic Tailings, Enhancing the Reactivity of Feedstocks

Authors: Sara Gardideh, Mansoor Barati

Abstract:

The mineral carbonation approach for reducing global warming has garnered interest on a worldwide scale. Due to the benefits of permanent storage and abundant mineral resources, mineral carbonation (MC) is one of the most effective strategies for sequestering CO₂. The combination of mineral processing for primary metal recovery and mineral carbonation for carbon sequestration is an emerging field of study with the potential to minimize capital costs. A detailed study of low-pressure gas-solid carbonation of ultramafic tailings in a dry environment has been accomplished. In order to track the changing structure of serpentine minerals and their reactivity as a function of temperature (300-900 °C), CO₂ partial pressure (25-90 mol %), and thermal preconditioning, thermogravimetry was utilized. The incongruence between the CO₂ van der Waals molecular diameter and the octahedral-tetrahedral lattice constants of serpentine was used to explain the mild carbonation reactivity. Serpentine requires additional thermal treatment to remove hydroxyl groups, resulting in the chemical transformation to pseudo-forsterite, a mineral composed of isolated SiO₄ tetrahedra linked by octahedrally coordinated magnesium ions. Heat treatment above 850 °C is adequate to remove chemically bound water from the lattice. Particles with a diameter < 34 μm are desirable, and serpentine thermally treated at 850 °C for 2.30 hours reached 65% CO₂ storage capacity. A decrease in particle size, an increase in temperature, and magnetic separation can dramatically enhance carbonation.

Keywords: particle size, thermogravimetry, thermal-treatment, serpentine

Procedia PDF Downloads 78
1515 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology we use sampling-based uncertainty propagation with first order error analysis; in the other approach we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that may be aleatory, but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, may not be precisely known but can be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why it has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.
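The distributional p-box idea can be sketched with a double-loop (nested) Monte Carlo: the outer loop samples the epistemic interval parameters (here the unknown mean and standard deviation of a normal input), the inner loop propagates aleatory samples through a toy model, and the envelope of the inner-loop percentiles bounds the response. The toy model and all intervals below are illustrative, not the challenge problem's.

```python
import random

# Double-loop sampling sketch: outer loop over epistemic intervals,
# inner loop over aleatory samples of a toy model y = x1 + x2**2.

def propagate(mu, sigma, n_inner, rng):
    ys = sorted(rng.gauss(mu, sigma) + rng.uniform(0, 1) ** 2
                for _ in range(n_inner))
    return ys[int(0.05 * n_inner)], ys[int(0.95 * n_inner) - 1]  # 5th/95th pct

def pbox_bounds(mu_iv=(1.0, 2.0), sd_iv=(0.2, 0.5),
                n_outer=200, n_inner=500, seed=1):
    rng = random.Random(seed)
    lo, hi = float("inf"), float("-inf")
    for _ in range(n_outer):                       # epistemic (interval) loop
        mu = rng.uniform(*mu_iv)
        sd = rng.uniform(*sd_iv)
        p5, p95 = propagate(mu, sd, n_inner, rng)  # aleatory loop
        lo, hi = min(lo, p5), max(hi, p95)
    return lo, hi

lo, hi = pbox_bounds()
print(lo < hi)
```

Because the outer loop only samples the intervals rather than searching them exhaustively, the envelope tends to be too narrow, which is exactly the underestimation of output bounds that motivates the optimization-based (PBO) alternative.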

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 230
1514 Application of GPRS in Water Quality Monitoring System

Authors: V. Ayishwarya Bharathi, S. M. Hasker, J. Indhu, M. Mohamed Azarudeen, G. Gowthami, R. Vinoth Rajan, N. Vijayarangan

Abstract:

Identification of water quality conditions in a river system based on limited observations is an essential task for meeting the goals of environmental management. The traditional method of water quality testing is to collect samples manually and send them to a laboratory for analysis; however, this can no longer meet the demands of water quality monitoring today. A set of automatic measurement and reporting systems for water quality has therefore been developed. In this project, water quality parameters collected by a multi-parameter water quality probe are transmitted to the data processing and monitoring center through the GPRS wireless communication network of a mobile carrier. The multi-parameter sensor is placed directly above the water level. The monitoring center consists of a GPRS module and a micro-controller that monitor the data, and the collected data can be monitored at any instant of time. At the pollution control board, the water quality sensor data are monitored on a computer using Visual Basic software. The system collects, transmits, and processes water quality parameters automatically, so production efficiency and economic benefit are greatly improved. GPRS technology works well in complex environments where water quality would otherwise go unmonitored, and is specifically applicable to automatic data transmission and monitoring for field water-analysis equipment at each collection point.

Keywords: multiparameter sensor, GPRS, visual basic software, RS232

Procedia PDF Downloads 398
1513 Predicting Response to Cognitive Behavioral Therapy for Psychosis Using Machine Learning and Functional Magnetic Resonance Imaging

Authors: Eva Tolmeijer, Emmanuelle Peters, Veena Kumari, Liam Mason

Abstract:

Cognitive behavioral therapy for psychosis (CBTp) is effective in many but not all patients, making it important to better understand the factors that determine treatment outcomes. To date, no studies have examined whether neuroimaging can make clinically useful predictions about who will respond to CBTp. To this end, we used machine learning methods that make predictions about symptom improvement at the individual patient level. Prior to receiving CBTp, 22 patients with a diagnosis of schizophrenia completed a social-affective processing task during functional MRI. Multivariate pattern analysis assessed whether treatment response could be predicted by brain activation responses to facial affect that was either socially threatening or prosocial. The resulting models significantly predicted symptom improvement, with distinct multivariate signatures predicting psychotic (r=0.54, p=0.01) and affective (r=0.32, p=0.05) symptoms. Psychotic symptom improvement was accurately predicted from relatively focal threat-related activation across hippocampal, occipital, and temporal regions; affective symptom improvement was predicted by a more dispersed profile of responses to prosocial affect. These findings enrich our understanding of the neurobiological underpinnings of treatment response. This study provides a foundation that will hopefully lead to greater precision and tailoring of the interventions offered to patients.
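The general multivariate-pattern workflow (not the authors' exact pipeline) can be sketched as leave-one-out cross-validated regression from voxel activation patterns to symptom change, scored by the correlation r between predicted and observed improvement. The data below are synthetic stand-ins, and ridge regression is an assumed choice of model.

```python
import numpy as np

# Leave-one-out ridge regression: hold out one patient, fit on the rest,
# predict the held-out symptom change, then correlate predictions with truth.

def loo_predict(X, y, alpha=1.0):
    n = len(y)
    preds = np.empty(n)
    for i in range(n):                          # hold out one patient at a time
        mask = np.arange(n) != i
        Xtr, ytr = X[mask], y[mask]
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]),
                            Xtr.T @ ytr)
        preds[i] = X[i] @ w
    return preds

rng = np.random.default_rng(42)
n_patients, n_voxels = 22, 10                   # 22 patients, as in the study
X = rng.standard_normal((n_patients, n_voxels)) # synthetic activation patterns
true_w = rng.standard_normal(n_voxels)
y = X @ true_w + 0.5 * rng.standard_normal(n_patients)  # signal + noise
preds = loo_predict(X, y)
r = float(np.corrcoef(preds, y)[0, 1])
print(r > 0.5)
```

Cross-validating at the patient level is what makes the reported r values estimates of out-of-sample predictive accuracy rather than in-sample fit.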

Keywords: cognitive behavioral therapy, machine learning, psychosis, schizophrenia

Procedia PDF Downloads 270
1512 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial-related information that are typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budget, 10-year projected financial statements have been developed for the representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members, and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating new technology. 
Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
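The break-even pricing rule described above reduces to simple discounted-cash-flow arithmetic: the maximum technology price is the price at which the NPV of purchasing, owning, and operating the machine equals the NPV of the labor cost it replaces over seven years. All dollar figures and the discount rate below are illustrative assumptions, not the panel data.

```python
# NPV-based maximum technology price: labor NPV minus operating-cost NPV,
# with the purchase assumed to happen at t = 0.

def npv(cashflows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

YEARS, RATE = 7, 0.06                       # horizon from the text; rate assumed
annual_labor = 12000.0                      # e.g. hand-pruning labor, $/yr (assumed)
annual_operating = 1500.0                   # assumed machine running cost, $/yr

labor_npv = npv([annual_labor] * YEARS, RATE)
operating_npv = npv([annual_operating] * YEARS, RATE)
# Break-even purchase price: NPV(labor saved) - NPV(operating costs).
max_price = labor_npv - operating_npv
print(round(max_price, 2))
```

A grower comparing a quoted machine price against `max_price` would adopt the technology only when the quote falls below it.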

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 251
1511 Perceiving Casual Speech: A Gating Experiment with French Listeners of L2 English

Authors: Naouel Zoghlami

Abstract:

Spoken-word recognition involves the simultaneous activation of potential word candidates, which compete with each other for final correct recognition. In continuous speech, the activation-competition process becomes more complicated due to speech reductions occurring at word boundaries. Lexical processing is more difficult in L2 than in L1 because L2 listeners often lack phonetic, lexico-semantic, syntactic, and prosodic knowledge in the target language. In this study, we investigate the on-line lexical segmentation hypotheses that French listeners of L2 English form and then revise as subsequent perceptual evidence is revealed. Our purpose is to shed further light on the processes of L2 spoken-word recognition in context and to better understand L2 listening difficulties through a comparison of skilled and unskilled listeners' reactions at the point where their working hypothesis is rejected. We use a variant of the gating experiment in which subjects transcribe an English sentence presented in increments of progressively greater duration. The spoken sentence was "And this amazing athlete has just broken another world record", chosen mainly because it includes common reductions and phonetic features of English, such as elision and assimilation. Our preliminary results show an important difference in the manner in which proficient and less-proficient L2 listeners handle connected speech: less-proficient listeners delay recognition of words as they wait for lexical and syntactic evidence to appear in the gates. Further statistical analyses are currently under way.

Keywords: gating paradigm, spoken word recognition, online lexical segmentation, L2 listening

Procedia PDF Downloads 456
1510 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data plays a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates at different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio. Additionally, we utilize wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves the data quality, notably suppressing severe noise at late times. The resulting denoised data exhibits a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
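The two denoising steps named above can be sketched minimally: stack repeated transient decays and average them (raising the signal-to-noise ratio by roughly the square root of the stack size), then apply a single-level Haar wavelet soft-threshold to suppress residual noise. The decay curve, noise level, and threshold are synthetic stand-ins for TEM soundings, and a single Haar level stands in for the full wavelet transform.

```python
import numpy as np

# Step 1: signal averaging across repeated traces.
def stack_average(traces):
    return np.mean(traces, axis=0)

# Step 2: single-level Haar wavelet soft-thresholding.
def haar_denoise(x, thresh):
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)          # Haar DWT, one level
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0)  # soft
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)          # inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.exp(-5 * t)                                  # idealised TEM decay
traces = [clean + 0.2 * rng.standard_normal(t.size) for _ in range(64)]
avg = stack_average(traces)
den = haar_denoise(avg, thresh=0.02)
rmse_raw = float(np.sqrt(np.mean((traces[0] - clean) ** 2)))
rmse_den = float(np.sqrt(np.mean((den - clean) ** 2)))
print(rmse_den < rmse_raw)
```

Production workflows would use a multi-level wavelet decomposition with data-driven thresholds, but the average-then-threshold structure is the same.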

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 78
1509 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and the model simulations were validated in Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of wind stress exerted on the lake surface and its impact on lake water level.
The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
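The water-balance behaviour described above (rain nearly cancelling evaporation, with a single outflow following a linear control law on mean water level) can be sketched with a lumped daily model. The rates, inflow, and gain below are illustrative assumptions, not calibrated values from the study.

```python
# Lumped daily water balance: level responds to rain, evaporation, river
# inflow, and a linear-control-law outflow proportional to level excess.

AREA = 68_800e6            # lake surface in m^2 (68,800 km^2, from the text)
K_OUT = 2_000.0            # outflow gain, m^3/s per metre above reference (assumed)

def simulate(days, rain_mm_d, evap_mm_d, inflow_m3s, level0=0.0):
    level = level0                                      # metres above reference
    for _ in range(days):
        rain = rain_mm_d / 1000.0 * AREA / 86400.0      # m^3/s equivalent
        evap = evap_mm_d / 1000.0 * AREA / 86400.0
        outflow = max(0.0, K_OUT * level)               # linear control law
        net = rain - evap + inflow_m3s - outflow        # m^3/s
        level += net * 86400.0 / AREA                   # back to metres/day
    return level

# Rain and evaporation nearly cancel; river inflow nudges the level up
# until the linear outflow law balances it.
final = simulate(days=3650, rain_mm_d=4.6, evap_mm_d=4.3, inflow_m3s=800.0)
print(round(final, 2))
```

At equilibrium the level settles where `K_OUT * level` equals the net inflow, which is why a control law responding only to mean water level reproduces the outflow so well.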

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 216
1508 The Internet of Things: A Survey of Authentication Mechanisms and Protocols for the Shifting Paradigm of Communicating Entities

Authors: Nazli Hardy

Abstract:

A multidisciplinary application of computer science and interactive database-driven web technology, the Internet of Things (IoT) represents a digital ecosystem that has a pervasive technological, social, and economic impact on the human population. It is a long-term technology, and its development is built around the connection of everyday objects to the Internet. It is estimated that by 2020, with billions of people connected to the Internet, the number of connected devices will exceed 50 billion. IoT thus represents a paradigm shift in our current interconnected ecosystem, a communication shift that will unavoidably affect people, businesses, consumers, clients, and employees. By nature, in order to provide a cohesive and integrated service, connected devices need to collect, aggregate, store, mine, and process personal and personalized data on individuals and corporations in a variety of contexts and environments. A significant factor in this paradigm shift is the necessity for secure and appropriate transmission, processing, and storage of the data. Thus, while the benefits of the applications appear boundless, these same opportunities are bounded by concerns such as trust, privacy, security, loss of control, and related issues. This poster and presentation examine multi-factor authentication (MFA) mechanisms that need to evolve from the login-password tuple to an Identity and Access Management (IAM) model, and further to the more cohesive Identity Relationship Management (IRM) standard. They also compare and contrast messaging protocols appropriate for the IoT ecosystem.
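One concrete way to move beyond the login-password tuple is a second factor such as an RFC 6238 time-based one-time password (TOTP). The sketch below builds both factors from the Python standard library only; the user record, password, and shared secret are illustrative, and a real IoT deployment would bind credentials to device identities rather than an in-memory dictionary.

```python
import base64, hashlib, hmac, struct, time

# RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation.
def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative user store: hashed password plus per-user TOTP secret.
USERS = {"alice": {"pw_hash": hashlib.sha256(b"s3cret").hexdigest(),
                   "totp_secret": "JBSWY3DPEHPK3PXP"}}

def authenticate(user, password, otp, now=None):
    rec = USERS.get(user)
    if rec is None:
        return False
    pw_ok = hmac.compare_digest(rec["pw_hash"],
                                hashlib.sha256(password.encode()).hexdigest())
    otp_ok = hmac.compare_digest(otp, totp(rec["totp_secret"], for_time=now))
    return pw_ok and otp_ok                             # both factors required

now = 59  # fixed timestamp so the example is deterministic
good = authenticate("alice", "s3cret", totp("JBSWY3DPEHPK3PXP", for_time=now), now=now)
bad = authenticate("alice", "wrong", totp("JBSWY3DPEHPK3PXP", for_time=now), now=now)
print(good, bad)
```

The constant-time `hmac.compare_digest` comparisons avoid timing side channels; IAM/IRM systems layer policy and relationship management on top of primitives like these.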

Keywords: Internet of Things (IoT), authentication, protocols, survey

Procedia PDF Downloads 291