Search results for: green network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6661

1111 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management

Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal

Abstract:

Selection of suppliers is a crucial problem in supply chain management, and sustainable supplier selection is an even bigger challenge for organizations. Environmental protection and social problems have been of growing concern to society in recent years, yet traditional supplier selection does not consider these factors; this research work therefore focuses on introducing sustainable criteria into the structure of supplier selection. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions of sustainable development - economic, environmental, and social - need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain, and minimize environmental impacts. The problem statement is the selection of suppliers in a sustainable supply chain network by ranking them against the identified sustainable criteria. The aim of this research is twofold: to find out which sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision making (MCDM) tools will be used to rank both criteria and suppliers. AHP analysis, a technique for efficient decision making, will be used to derive weights for the identified criteria. TOPSIS will be used to rate the suppliers and then rank them. TOPSIS is an MCDM method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
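The TOPSIS step described above can be sketched in a few lines of Python; the supplier scores, criteria weights, and benefit/cost flags below are hypothetical and are not taken from the paper.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix  : rows = suppliers, columns = criteria scores (hypothetical)
    weights : criteria weights, e.g. derived from an AHP analysis
    benefit : True where a higher score is better, False for cost criteria
    """
    n_crit = len(weights)
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    cols = list(zip(*v))
    # positive ideal solution (PIS) and negative ideal solution (NIS)
    pis = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    nis = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - n) ** 2 for x, n in zip(row, nis)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores

# hypothetical suppliers scored on cost (lower is better), quality, eco-compliance
suppliers = [[7, 9, 6], [5, 7, 8], [8, 6, 7]]
weights = [0.5, 0.3, 0.2]                      # e.g. AHP-derived
scores = topsis_rank(suppliers, weights, benefit=[False, True, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

Each score is the relative closeness to the ideal solution, so suppliers are ranked by descending score.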

Keywords: sustainable supply chain management, sustainable criteria, MCDM tools, AHP analysis, TOPSIS method

Procedia PDF Downloads 321
1110 A Kunitz-Type Serine Protease Inhibitor from Rock Bream, Oplegnathus fasciatus Involved in Immune Responses

Authors: S. D. N. K. Bathige, G. I. Godahewa, Navaneethaiyer Umasuthan, Jehee Lee

Abstract:

Kunitz-type serine protease inhibitors (KTIs) have been identified in various organisms, including animals, plants, and microbes. These proteins share single or multiple Kunitz inhibitory domains, either linked together or associated with other types of domains. A characteristic Kunitz-type domain is composed of around 60 amino acid residues with six conserved cysteine residues, stabilized by three disulfide bridges. KTIs are involved in various physiological processes, such as ion channel blocking, blood coagulation, fibrinolysis, and inflammation. In this study, a protein containing two Kunitz-type domains was identified from a rock bream database and designated RbKunitz. The coding sequence of RbKunitz encodes 507 amino acids with a theoretical molecular mass of 56.2 kDa and an isoelectric point (pI) of 5.7. Several functional domains, including a MANEC superfamily domain, a PKD superfamily domain, and an LDLa domain, were predicted in addition to the two characteristic Kunitz domains. Moreover, trypsin interaction sites were identified in the Kunitz domains. Homology analysis revealed that RbKunitz shares the highest identity (77.6%) with Takifugu rubripes. Twenty-eight completely conserved cysteine residues were recognized when RbKunitz was compared with orthologs from different taxonomical groups. These structural features indicate the rigidity of the RbKunitz folding structure, which is required for its proper function. A phylogenetic tree constructed using the neighbor-joining method showed that the KTIs from fish and non-fish species have evolved separately; rock bream clustered with Takifugu rubripes. SYBR Green qPCR was performed to quantify RbKunitz transcripts in different tissues, both unchallenged and challenged. RbKunitz mRNA was detected in all tissues analyzed (muscle, spleen, head kidney, blood, heart, skin, liver, intestine, kidney, and gills), with the highest transcript level detected in gill tissue.
The temporal transcription profile of RbKunitz in rock bream blood was analyzed upon LPS (lipopolysaccharide), poly I:C (polyinosinic:polycytidylic acid), and Edwardsiella tarda challenge to understand the immune responses of this gene. Compared to the unchallenged control, RbKunitz exhibited strong up-regulation at 24 h post injection (p.i.) after LPS and E. tarda injection, and comparatively robust expression at 3 h p.i. upon poly I:C challenge. Taken together, these data indicate that RbKunitz may be involved in immune responses to pathogenic stress, helping to protect the rock bream.

Keywords: Kunitz-type, rock bream, immune response, serine protease inhibitor

Procedia PDF Downloads 375
1109 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
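The objective the MIP maximizes (the cumulative probability of successful target detection along a path) can be illustrated with a toy single-agent, brute-force version; the grid size, prior belief, and detection probability below are invented for the example, and the actual paper solves a far richer MIP formulation with CPLEX.

```python
from itertools import product

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # four directions plus "stay"

def success_probability(path, prior, p_detect):
    """P(detection) = sum over cells of prior(cell) * (1 - (1 - p)^visits)."""
    visits = {}
    for cell in path:
        visits[cell] = visits.get(cell, 0) + 1
    return sum(prior.get(c, 0.0) * (1 - (1 - p_detect) ** k)
               for c, k in visits.items())

def best_path(start, steps, prior, p_detect, size=3):
    """Brute-force the best single-agent path on a size x size grid
    (feasible only because this toy instance is tiny)."""
    best = (0.0, [start])
    for moves in product(MOVES, repeat=steps):
        path, (x, y) = [start], start
        feasible = True
        for dx, dy in moves:
            x, y = x + dx, y + dy
            if not (0 <= x < size and 0 <= y < size):
                feasible = False
                break
            path.append((x, y))
        if feasible:
            best = max(best, (success_probability(path, prior, p_detect), path))
    return best

prior = {(2, 2): 0.6, (0, 2): 0.4}   # hypothetical belief over target location
p_star, path_star = best_path((0, 0), steps=4, prior=prior, p_detect=0.8)
```

In this instance the best 4-step path sweeps through both high-belief cells, which is exactly the kind of trade-off the MIP resolves at scale.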

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 366
1108 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"

Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo

Abstract:

DWDM (Dense Wavelength Division Multiplexing) deployment is in constant growth around the world, driven by consumer demand. From this demand arises the need for a system that can expand the communications of an entire nation and support the computing trends of its society, according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) currently provides internet and data transport services over PDH and SDH technology, which in the Republic of Honduras, C.A., represents a viable option for the consumer in terms of purchase value and ease of acquisition, but lacks efficiency in terms of technological advancement; this is an obstacle that limits long-term socio-economic development in comparison with other countries in the region and hampers competition among the telecommunications companies engaged in this sector. For that reason, we propose to adopt a technology already established in Europe and apply it in our country to provide broadband data transfer: DWDM. In this way, a stable, high-quality service can be offered that allows Honduras to compete in a globalized world. Once implemented, DWDM builds upon existing resources, such as the equipment already in use, and opens a new stage that gives a business image to the Republic of Honduras, C.A. as a nation, ensuring data transport and broadband internet. This benefits, in the first instance, existing customers, as well as all the public and private institutions that require such services.

Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH

Procedia PDF Downloads 259
1107 Machine Learning Facing the Behavioral Noise Problem in Imbalanced Data Using One Side Behavioral Noise Reduction: Application to Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications, such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty tackling the problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies imbalanced data, is overlapping instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Second, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
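The two steps of OSBNR can be illustrated with a simplified Python sketch; a minimal k-means stands in for the paper's cluster analysis, the overlap rule is a plain distance threshold, and all points below are synthetic.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means used as a stand-in for the paper's cluster analysis."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep it if empty)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def osbnr_sketch(majority, minority, k=2, radius=1.0):
    """Simplified one-side noise reduction in the spirit of OSBNR:
    cluster the minority class, then drop the majority instances that
    overlap (fall within `radius` of) a minority behavior cluster."""
    centers = kmeans(minority, k)
    return [x for x in majority
            if min(math.dist(x, c) for c in centers) > radius]

# synthetic data: two minority behavior clusters plus majority points,
# two of which overlap the minority regions (behavioral noise)
minority = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5)]
majority = [(0.2, 0.2), (5.2, 5.2), (10, 10), (9, 9), (8, 0)]
cleaned = osbnr_sketch(majority, minority)
```

The two majority points sitting inside the minority clusters are removed, while the rest of the majority class is kept untouched.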

Keywords: machine learning, imbalanced data, data mining, big data

Procedia PDF Downloads 129
1106 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainty in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG: a genetic algorithm on the basis of granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
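The genetic-algorithm component can be illustrated with a minimal Python sketch on a toy consumer-to-facility assignment instance; the cost figures, GA parameters, and the plain cost function standing in for the paper's granular simulation are all hypothetical.

```python
import random

def total_cost(assign, transport, facility_fixed):
    """Total network cost: per-consumer transport cost plus a fixed cost
    for every facility that serves at least one consumer."""
    open_fac = set(assign)
    return (sum(transport[c][f] for c, f in enumerate(assign))
            + sum(facility_fixed[f] for f in open_fac))

def evolve(transport, facility_fixed, pop_size=40, gens=80, seed=1):
    rng = random.Random(seed)
    n_cons, n_fac = len(transport), len(facility_fixed)
    # chromosome: facility index chosen for each consumer
    pop = [[rng.randrange(n_fac) for _ in range(n_cons)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: total_cost(a, transport, facility_fixed))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_cons)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:               # single-gene mutation
                child[rng.randrange(n_cons)] = rng.randrange(n_fac)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: total_cost(a, transport, facility_fixed))

transport = [[1, 9], [2, 8], [9, 1], [8, 2]]  # consumer x facility transport cost
fixed = [5, 5]                                # facility fixed costs
best = evolve(transport, fixed)
```

In the real GASG approach the fitness evaluation would call the granular simulation rather than a closed-form cost function.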

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 310
1105 Water Infrastructure Asset Management: A Comparative Analysis of Three Urban Water Utilities in South Africa

Authors: Elkington S. Mnguni

Abstract:

Water and sanitation services in South Africa are characterized by both achievements and challenges. After the end of apartheid in 1994, the newly elected government faced the challenge of eradicating backlogs with respect to access to basic services, including water and sanitation. Capital investment in the development of new water and sanitation infrastructure to provide basic services to previously disadvantaged communities has grown, to a certain extent, at the expense of investment in the operation and maintenance of new and existing infrastructure. Challenges resulting from aging infrastructure and poor plant performance highlight the need to invest in the maintenance, rehabilitation, and replacement of existing infrastructure to optimize the return on investment. Advanced water infrastructure asset management (IAM) is key to achieving adequate levels of service, particularly with regard to reliable, high-quality drinking water supply, prevention of urban flooding, efficient use of natural resources, and prevention of pollution and associated risks. Against this backdrop, this paper presents an appraisal of water and sanitation IAM systems in three South African utilities, all metropolitan cities in the Gauteng Province. About a quarter of the national population lives in the three rapidly urbanizing cities of Johannesburg, Ekurhuleni and Tshwane, located in a semi-arid region. A literature review has been done, and field visits to some of the utility facilities are being conducted. Semi-structured interviews will be conducted with the three utilities.
The following critical factors are being analysed in terms of compliance with the national Water Services IAM Strategy (2011) and other applicable legislation: asset registers; capacity of assets; current and predicted demand; funding availability / budget allocations; operation & maintenance, renewal & replacement, and risk management plans; no-drop status (non-revenue water levels); blue drop status (water quality); green drop status (effluent quality); and skills availability. Some of the key challenges identified in the literature review include funding constraints, skills shortages, and wastewater treatment plants operating beyond their design capacities. These challenges will be verified during the field visits and research interviews. Gaps between literature and practice will be identified and relevant recommendations made if necessary. The objective of this study is to contribute to the resolution of the challenges brought about by the backlogs in the operation and maintenance of water and sanitation assets in the country in general, and in the three cities in particular, thus improving their sustainability.

Keywords: asset management, backlogs, levels of service, sustainability, water and sanitation infrastructure

Procedia PDF Downloads 222
1104 Toxicity Evaluation of Reduced Graphene Oxide on First Larval Stages of Artemia sp.

Authors: Roberta Pecoraro

Abstract:

The focus of this work was to investigate the potential toxic effect of titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites on nauplii of the microcrustacean Artemia sp. In order to assess the nanocomposite's toxicity, a short-term test was performed by exposing nauplii to solutions containing TiO₂-rGO. To prepare the TiO₂-rGO nanocomposites, a green procedure based on solar photoreduction was proposed; it allows the photocatalysts to be obtained by exploiting the photocatalytic properties of titania activated by solar irradiation, avoiding the high temperatures and pressures required for standard hydrothermal synthesis. Powders of TiO₂-rGO supplied by the Department of Chemical Sciences (University of Catania) are indicated as TiO₂-rGO at 1% and TiO₂-rGO at 2%. Starting from a stock solution (1 mg rGO-TiO₂/10 ml ASPM water) of each type, we tested four different concentrations (serial dilutions ranging from 10⁻¹ to 10⁻⁴ mg/ml). All the solutions were sonicated for 12 min prior to use. Artificial seawater (called ASPM water) was prepared to guarantee the hatching of the cysts and to maintain the nauplii; the durable cysts used in this study, marketed by JBL (JBL GmbH & Co. KG, Germany), were hydrated with ASPM water to obtain nauplii (instar II-III larvae). The hatching of the cysts was carried out in the laboratory by immersing them in ASPM water inside a 500 ml beaker and keeping them constantly oxygenated with an aerator insufflating microbubbles of air: after 24-48 hours, the cysts hatched and the nauplii appeared. The nauplii in the second and third stages of development were collected one by one, using stereomicroscopes, and transferred into 96-well microplates, one nauplius per well. The wells were quickly filled with 300 µl of each specific concentration of the test solution, and control samples were incubated with ASPM water only.
Replicates were performed for each concentration. Finally, the microplates were placed on an orbital shaker, and the tests were read 24 and 48 hours after inoculating the solutions to assess the endpoint (immobility/death) for the larvae. Nauplii that appeared motionless were counted as dead, and the percentage of mortality was calculated for each treatment. The results showed a low percentage of immobilization for both TiO₂-rGO at 1% and TiO₂-rGO at 2% at all concentrations tested: for TiO₂-rGO at 1%, it was below 12% after 24 h and below 15% after 48 h; for TiO₂-rGO at 2%, it was below 8% after 24 h and below 12% after 48 h. In agreement with other studies in the literature, the results showed neither mortality nor toxic effects on the development of the larvae after exposure to rGO. Finally, it is important to highlight that the TiO₂-rGO catalysts were also tested in the solar photodegradation of a toxic herbicide (2,4-dichlorophenoxyacetic acid, 2,4-D), obtaining a high percentage of degradation; therefore, this alternative approach can be considered a good strategy for obtaining high-performance photocatalysts.

Keywords: Nauplii, photocatalytic properties, reduced GO, short-term toxicity test, titanium dioxide

Procedia PDF Downloads 179
1103 Surface Modified Quantum Dots for Nanophotonics, Stereolithography and Hybrid Systems for Biomedical Studies

Authors: Redouane Krini, Lutz Nuhn, Hicham El Mard, Cheol Woo Ha, Yoondeok Han, Kwang-Sup Lee, Dong-Yol Yang, Jinsoo Joo, Rudolf Zentel

Abstract:

To use quantum dots (QDs) in the two-photon initiated polymerization (TPIP) technique for 3D patterning, the QDs were modified on the surface with photosensitive end groups that are able to undergo photopolymerization. We were able to fabricate fluorescent 3D lattice structures from the photopatternable QDs by TPIP for photonic devices such as photonic crystals and metamaterials. QDs of different diameters have different emission colors, and through the mixing of RGB QDs, white-light fluorescence from the polymeric structures has been created. Metamaterials are capable of unique interactions with the electrical and magnetic components of electromagnetic radiation, and for manipulating light it is crucial to have a negative refractive index. In combination with QDs, polymeric structures can be designed via the TPIP technique with properties that cannot be found in nature. This makes these artificial materials highly important for real-life applications in photonics and optoelectronics. Understanding the interactions between nanoparticles and biological systems is of great interest in the biomedical research field. We developed a synthetic strategy of polymer-functionalized nanoparticles for biomedical studies, obtaining hybrid systems of QDs and copolymers with a strongly binding network in an inner shell, which can then be modified through their poly(ethylene glycol)-functionalized outer shell. These hybrid systems can be used as models for the investigation of cell penetration and drug delivery by combining cryo-TEM and fluorescence studies.

Keywords: biomedical study models, lithography, photo induced polymerization, quantum dots

Procedia PDF Downloads 520
1102 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be addressed without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety at railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards a more sustainable, eco-friendly and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units for performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety

Procedia PDF Downloads 495
1101 Extraction of Cellulose Nanofibrils from Pulp Using Enzymatic Pretreatment and Evaluation of Their Papermaking Potential

Authors: Ajay Kumar Singh, Arvind Kumar, S. P. Singh

Abstract:

Cellulose nanofibrils (CNFs) have shown potential for extensive use in various fields, including papermaking, due to their unique characteristics. In this study, CNFs were prepared by fibrillating pulp obtained from raw materials, e.g. bagasse, hardwood, and softwood, using enzymatic pretreatment followed by mechanical refining. These nanofibrils, when examined under FE-SEM, show that partial fibrillation of the fiber surface has resulted in the production of nanofibers. Mixing these nanofibers with unrefined and normally refined fibers shows their reinforcing effect. This effect is manifested as an improvement in physical and mechanical properties, e.g. the tensile index and burst index of the paper. The tear index, however, was observed to decrease on blending with nanofibers. The optical properties of paper sheets made from blended fibers showed no significant change in comparison to those made from only mechanically refined pulp. Mixing normal pulp fibers with nanofibers increases the °SR and consequently decreases the drainage rate. These changes in the mechanical, optical, and other physical properties of paper sheets made from nanofibril-blended pulp are explained by considering the distribution of the nanofibrils alongside microfibrils in the fibrous network. Since papers/boards with higher strength usually have diminished optical properties, which is a drawback in their quality, the present work has the potential for developing papers/boards having improved strength along with undiminished optical properties, utilising the concepts of nanoscience and nanotechnology.

Keywords: enzymatic pretreatment, mechanical refining, nanofibrils, paper properties

Procedia PDF Downloads 350
1100 Time Series Analysis: The Case of China-USA Trade during COVID-19, Examining the Enormity of Abnormal Pricing with the Exchange Rate

Authors: Md. Mahadi Hasan Sany, Mumenunnessa Keya, Sharun Khushbu, Sheikh Abujar

Abstract:

Since the beginning of China's economic reform, trade between the U.S. and China has grown rapidly, and it has increased further since China's accession to the World Trade Organization in 2001. The U.S. imports more from China than it exports to it; the trade war between China and the U.S. reduced the trade deficit in 2019, but in 2020 the opposite happened. Washington launched a full-scale trade war against China in March 2016, and this was followed by a catastrophic epidemic. The main goal of our study is to measure and predict trade relations between China and the U.S. before and after the arrival of the COVID epidemic. ML models use different data as input but have no time dimension, which is present in time series models, and are only able to predict the future from previously observed data. The LSTM model (a well-known recurrent neural network) is applied as the best time series model for trade forecasting. We have been able to create a sustainable forecasting system for trade between China and the U.S. by closely monitoring a dataset published by the state website NZ Tatauranga Aotearoa, covering January 1, 2015, to April 30, 2021. Throughout the survey, we provide a 180-day forecast outlining what would happen to trade between China and the U.S. during COVID-19. In addition, we illustrate that the LSTM model provides outstanding outcomes in time series data analysis compared with RFR and SVR (both ML models). The study looks at how the current COVID outbreak affects China-U.S. trade. As a comparative study, the RMSE is calculated for LSTM, RFR, and SVR. From our time series analysis, it can be said that the LSTM model gives a very favorable view of the future export situation in China-U.S. trade.
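The RMSE used above to compare LSTM, RFR, and SVR can be sketched as follows; the monthly export values and the naive persistence baseline are hypothetical and only illustrate the metric, not the paper's models.

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error, the comparison metric named in the abstract."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# hypothetical monthly export values (the real series comes from Stats NZ)
series = [100, 102, 101, 105, 107, 110, 108, 112]
actual = series[1:]          # values to predict
persistence = series[:-1]    # naive baseline: forecast = last observed value
baseline_error = rmse(actual, persistence)
```

A model such as the paper's LSTM would be judged useful if its RMSE on held-out data beats this kind of naive baseline.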

Keywords: RFR, China-U.S. trade war, SVR, LSTM, deep learning, Covid-19, export value, forecasting, time series analysis

Procedia PDF Downloads 192
1099 Self-Organizing Maps for Credit Card Fraud Detection

Authors: ChunYi Peng, Wei Hsuan CHeng, Shyh Kuang Ueng

Abstract:

This study focuses on the application of self-organizing map (SOM) technology to the analysis of credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
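A minimal SOM-based anomaly flagging sketch follows; it is illustrative only, with synthetic 2-D transaction features, and the paper's actual system, visualization tool, and composite techniques are far richer.

```python
import math
import random

def train_som(data, n_units=5, epochs=50, lr0=0.5, seed=0):
    """Train a tiny 1-D SOM on 2-D feature vectors (illustrative only)."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        for x in data:
            # best-matching unit for this sample
            b = min(range(n_units), key=lambda i: math.dist(units[i], x))
            for i in range(n_units):               # neighborhood of width 1
                h = 1.0 if i == b else 0.5 if abs(i - b) == 1 else 0.0
                if h:
                    units[i] = [u + lr * h * (xi - u)
                                for u, xi in zip(units[i], x)]
    return units

def is_anomaly(x, units, threshold):
    """Flag a transaction whose features are far from every learned unit."""
    return min(math.dist(u, x) for u in units) > threshold

# synthetic "normal" transaction features along a line
normal = [(a / 10, a / 10 + 0.1) for a in range(20)]
units = train_som(normal)
flag_far = is_anomaly((50.0, -50.0), units, threshold=1.0)
flag_near = is_anomaly((0.9, 1.0), units, threshold=1.0)
```

After training, the map's units summarize the normal transaction distribution, so a large quantization error is a simple anomaly signal of the kind the abstract describes.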

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 52
1098 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks

Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu

Abstract:

The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) concerns how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of the other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm can guarantee the rendezvous of users using only one cognitive radio in the scenario of asynchronous clocks (i.e., arbitrary time drift between the users), heterogeneous channels (i.e., the available channel sets of the users are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. Then it encodes the quaternary number into a quinary bootstrapping sequence according to a carefully designed quaternary-quinary coding table with the prefix "R00". Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six different types of elaborately generated subsequences. A user can access the CH matrix row by row and accordingly perform its channel hopping to attempt rendezvous with the other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state of the art in terms of the MTTR and the Expected Time-to-Rendezvous (ETTR).
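The first step of QCMS-CH, representing a channel index as a fixed-length quaternary number, can be sketched as below; the quaternary-quinary coding table and the subsequence construction are paper-specific and are not reproduced here.

```python
def to_quaternary(r, length):
    """Fixed-length base-4 representation of a channel index r, most
    significant digit first: the first step of QCMS-CH. The subsequent
    encoding into a quinary bootstrapping sequence uses the paper's
    coding table, which is not shown."""
    digits = []
    for _ in range(length):
        digits.append(r % 4)
        r //= 4
    if r:
        raise ValueError("length too small to represent r")
    return digits[::-1]

# e.g. channel index 11 over 3 quaternary digits: 11 = 0*16 + 2*4 + 3
digits = to_quaternary(11, 3)
```

The fixed length matters because both users must derive bootstrapping sequences of the same size to build compatible CH matrices.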

Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding

Procedia PDF Downloads 86
1097 Copper Price Prediction Model for Various Economic Situations

Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

Copper is an essential raw material used in the construction industry. During the year 2021 and the first half of 2022, the global market suffered a significant fluctuation in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to an unexpected financial risk. To this end, this paper aims to develop two ANN-LSTM price prediction models, using Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first model is a multivariate model that forecasts the copper price of the next month, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data on average monthly London Metal Exchange copper prices are collected from January 2009 to July 2022, and potential external factors are identified and employed in the multivariate model. These factors lie under three main categories: energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters are analyzed with respect to the copper prices using correlation and multicollinearity tests in the R software; the parameters are then further screened to select those that influence the copper prices. The two LSTM models are then developed, and the dataset is divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can still act as prediction tools for diverse economic situations.
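The correlation screening step (done in R in the paper) can be sketched in Python; the monthly series and the 0.5 threshold below are hypothetical, and the multicollinearity test is omitted.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen_factors(factors, prices, threshold=0.5):
    """Keep only the external factors whose absolute correlation with the
    copper prices clears the threshold (threshold value is hypothetical)."""
    return [name for name, series in factors.items()
            if abs(pearson(series, prices)) >= threshold]

prices = [5.0, 5.5, 6.1, 6.0, 6.8, 7.2]   # hypothetical monthly copper prices
factors = {
    "energy_price": [3.0, 3.4, 3.9, 3.8, 4.4, 4.7],   # tracks the prices
    "unrelated":    [1.0, -1.0, 1.0, -1.0, 1.0, -1.0],
}
kept = screen_factors(factors, prices)
```

Only the surviving factors would then feed the multivariate LSTM as exogenous inputs.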

Keywords: copper prices, prediction model, neural network, time series forecasting

Procedia PDF Downloads 108
1096 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants

Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey

Abstract:

Next-generation power systems are equipped with abundantly available, free renewable energy sources (RES). During their low-cost operation, the price of electricity drops significantly, and sometimes it becomes negative. It is therefore tempting not to operate traditional power plants (e.g., coal power plants) in order to reduce losses. In fact, this is not a cost-effective solution, because these power plants incur shutdown and startup costs. Moreover, they require a certain time to shut down and also need a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing startup time, and lowering the minimum load. The control strategy is formulated as a mixed-integer linear program (MILP), ensuring an optimal solution for the profit-maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants with the same efficiencies under different electricity price scenarios. The study concludes that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator.

Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model

Procedia PDF Downloads 137
1095 Harmonic Distortion Analysis in Low Voltage Grid with Grid-Connected Photovoltaic

Authors: Hedi Dghim, Ahmed El-Naggar, Istvan Erlich

Abstract:

Power electronic converters are being introduced into low voltage (LV) grids at an increasingly rapid rate due to the growing adoption of power electronic-based home appliances in residential grids. Photovoltaic (PV) systems are among the renewable energy sources most commonly installed in distribution power systems. This trend has led to high distortion in the supply voltage, which consequently produces harmonic currents in the network and causes an inherent voltage unbalance. In order to investigate the effect of harmonic distortion, a case study of a typical LV grid configuration from southern Germany with high penetration of 3-phase and 1-phase rooftop-mounted PV was first considered. Electromagnetic transient (EMT) simulations were then carried out in the MATLAB/Simulink environment using detailed models of power electronic-based loads, ohmic loads, and 1- and 3-phase PV. Note that the switching patterns of the power electronic circuits were considered in this study. Measurements were then performed to analyze the distortion levels with the PV operating under different solar irradiances. The characteristics of the load-side harmonic impedances were analyzed, and their harmonic contributions were evaluated for different distortion levels. The effect of the high penetration of PV on the harmonic distortion of both the positive and negative sequences was also investigated. The simulation results are presented based on case studies. The current distortion levels are in agreement with the relevant standards, although the Total Harmonic Distortion (THD) increases under low PV power generation due to its inverse relation with the fundamental current.
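The THD figure of merit quoted above follows the standard definition: the ratio of the RMS of the harmonic components to the RMS of the fundamental. A minimal sketch, which also illustrates the inverse relation with the fundamental current:

```python
import numpy as np

def thd_percent(rms_components):
    """THD (%) = 100 * sqrt(sum of squared harmonic RMS values) / fundamental RMS.

    rms_components[0] is the fundamental; the rest are harmonics 2..N.
    """
    comps = np.asarray(rms_components, dtype=float)
    return 100.0 * np.sqrt(np.sum(comps[1:] ** 2)) / comps[0]
```

With fixed harmonic currents of 0.03 A and 0.04 A, THD is 5% at a 1 A fundamental but doubles to 10% when low PV generation halves the fundamental to 0.5 A.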

Keywords: harmonic distortion analysis, power quality, PV systems, residential distribution system

Procedia PDF Downloads 261
1094 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents –particularly, deep eutectic solvents (DES) and ionic liquids (IL)– as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecyl phosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and their cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes, and, most notably, demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%).
Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents signifies a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
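The absolute average relative deviation (AARD) used to score the MNLR and ANN models has a standard definition; a minimal sketch:

```python
def aard_percent(measured, predicted):
    """Absolute average relative deviation in percent:
    100/N * sum(|(pred - meas) / meas|)."""
    pairs = list(zip(measured, predicted))
    return 100.0 / len(pairs) * sum(abs((p - m) / m) for m, p in pairs)
```

Lower values indicate a better fit, which is how the ANN's 0.10% for Thy:DecA compares favorably with the MNLR's 2.74%.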

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 66
1093 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. the printing of specimen labels or the activation of a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error.
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software-agent technologies, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
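The traffic-light coding reduces to a simple threshold rule on each stage's load. The tool itself is planned in JavaScript, so the Python sketch below is only a language-neutral stand-in, and the thresholds are illustrative assumptions:

```python
def flow_status(backlog, capacity, slow=0.7, critical=0.9):
    """Map a stage's load ratio to the traffic-light colours:
    green = normal flow, orange = slow flow, red = critical flow.
    The 0.7 / 0.9 thresholds are illustrative, not from the paper."""
    ratio = backlog / capacity
    if ratio >= critical:
        return "red"
    if ratio >= slow:
        return "orange"
    return "green"
```

The real-time flow diagram would then recolour each stage as its backlog-to-capacity ratio crosses these thresholds.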

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 283
1092 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots

Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha

Abstract:

Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or the automation of measurements. Chemical sensors have displaced the conventional analytical methods - sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor which, in addition to a converter, also possesses a biologically active material that is the basis for the detection of specific chemicals in the sample. Each biosensor device mainly consists of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable one. Based on these two elements, biosensors can be divided into two categories: by the recognition element (e.g. immunosensor) and by the transducer (e.g. optical sensor). An optical sensor works by measuring quantitative changes in the parameters characterizing light radiation. The most commonly analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used whose optical properties change due to the transformation of the tested species. The dyes most commonly used in this method are small molecules with an aromatic ring, like rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability, and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the ‘nano’ size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, which occurs mainly in the brain and central nervous system of mammals. Dopamine is responsible for the transmission of information through the nervous system and plays an important role in the processes of learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In this sensor, dopamine molecules coat the GQD surface; as a result, fluorescence quenching occurs due to Förster Resonance Energy Transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.
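Quenching-based readouts of this kind are commonly calibrated with the Stern-Volmer relation, F₀/F = 1 + K_SV·[Q]. The sketch below assumes such a linear calibration purely for illustration; the relation, the constant, and the numbers are not taken from the paper:

```python
def quencher_conc(f0, f, k_sv):
    """Invert the Stern-Volmer relation F0/F = 1 + K_SV * [Q] to recover
    the quencher (here, dopamine) concentration from measured quenching.

    f0: fluorescence without quencher; f: measured fluorescence;
    k_sv: Stern-Volmer constant from calibration (illustrative).
    """
    return (f0 / f - 1.0) / k_sv
```

For example, a halving of fluorescence (f0 = 100, f = 50) with K_SV = 2 (in the calibration's concentration units) maps to a concentration of 0.5 in those units.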

Keywords: biosensor, dopamine, fluorescence, quantum dots

Procedia PDF Downloads 359
1091 CsPbBr₃@MOF-5-Based Single Drop Microextraction for in-situ Fluorescence Colorimetric Detection of Dechlorination Reaction

Authors: Yanxue Shang, Jingbin Zeng

Abstract:

Chlorobenzene homologues (CBHs) are a category of environmental pollutants that cannot be ignored. They can persist in the environment for a long period and are potentially carcinogenic. The traditional approach to degrading CBHs is dechlorination followed by sample preparation and analysis. This is not only time-consuming and laborious, but the detection and analysis also rely on large-scale instruments, so rapid, low-cost detection cannot be achieved. Compared with traditional sensing methods, colorimetric sensing is simpler and more convenient. In recent years, fluorescence-based chromaticity sensors have attracted more and more attention. Compared with sensing methods based on changes in fluorescence intensity, changes in color gradients are easier to recognize with the naked eye. Accordingly, this work proposes to use single drop microextraction (SDME) technology to solve the above problems. After the dechlorination reaction is completed, the organic droplet extracts Cl⁻ and realizes fluorescence colorimetric sensing at the same time. This method integrates sample processing and visual in-situ detection, simplifying the detection process. As the fluorescence colorimetric sensing material, CsPbBr₃ was encapsulated in MOF-5 to construct a CsPbBr₃@MOF-5 composite. The fluorescence colorimetric sensor was then constructed by dispersing the composite in SDME organic droplets. When the Br⁻ in CsPbBr₃ exchanges with the Cl⁻ produced by the dechlorination reactions, it is converted into CsPbCl₃, and the fluorescence of the SDME droplet changes from green to blue emission, thereby enabling visual observation. Therein, SDME concentrates and enriches the Cl⁻, replacing conventional sample pretreatment, while the fluorescence color change of CsPbBr₃@MOF-5 replaces instrument-based detection, achieving rapid real-time analysis.
Owing to its adsorption ability, MOF-5 not only improves the stability of CsPbBr₃ but also promotes the adsorption of Cl⁻, simultaneously accelerating the exchange of Br⁻ and Cl⁻ in CsPbBr₃ and the detection of Cl⁻. The adsorption process was verified by density functional theory (DFT) calculations. The method exhibits exceptional linearity for Cl⁻ in the range of 10⁻² - 10⁻⁶ M (10000 μM - 1 μM) with a limit of detection of 10⁻⁷ M. Thereafter, the dechlorination reactions of different kinds of CBHs were also carried out with this method, all with satisfactory detection ability. The accuracy was also verified against gas chromatography (GC), and the SDME method developed in this work was found to be highly credible. In summary, the in-situ visualization of dechlorination reactions combines sample processing with fluorescence colorimetric sensing. The strategy researched herein thus represents a promising method for the visual detection of dechlorination reactions and can be extended to applications in the environment, chemical industries, and foods.

Keywords: chlorobenzene homologues, colorimetric sensor, metal halide perovskite, metal-organic frameworks, single drop microextraction

Procedia PDF Downloads 140
1090 Computational Investigation of V599 Mutations of BRAF Protein and Its Control over the Therapeutic Outcome under the Malignant Condition

Authors: Mayank, Navneet Kaur, Narinder Singh

Abstract:

The V599 mutations in the BRAF protein are extremely oncogenic and responsible for countless malignant conditions. Along with the wild type, V599E, V599D, and V599R are the important mutated variants of the BRAF protein. BRAF inhibitory anticancer agents are continuously being developed, and sorafenib is a BRAF inhibitor under clinical use. The crystal structures of sorafenib bound to the wild-type and V599 variant proteins are known, showing a similar interaction pattern in both cases. In both cases, the mutated 599th residue is also found not to interact directly with the co-crystallized sorafenib molecule. However, the IC50 value of sorafenib was found to differ markedly between the two cases, i.e., 22 nmol/L for the wild-type and 38 nmol/L for the V599E protein. Molecular docking and MMGBSA binding energy results also revealed a significant difference in the binding pattern of sorafenib in the two cases. Therefore, to explore the role of the distinctively situated 599th residue, we conducted further comprehensive computational studies. The molecular dynamics simulation, residue interaction network (RIN) analysis, and residue correlation study results revealed the influence of the 599th residue on the therapeutic outcome and overall dynamics of the BRAF protein. Therefore, although the position of the 599th residue is far from the ligand-binding cavity of BRAF, it still has exceptional control over the overall functional outcome of the protein. The insight obtained here may prove extremely important and guide the design of ideal BRAF inhibitory anticancer molecules.

Keywords: BRAF, oncogenic, sorafenib, computational studies

Procedia PDF Downloads 111
1089 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate in the ICU is high, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model for predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine, among others, two numeric models using the MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNNs) were used - VGG and ResNet for the numeric data and ResNet for the image data - and they were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output produced by a softmax layer. The hybrid model successfully predicted AKI; the highest AUROC of the model was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
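The fusion step — concatenating the two models' feature vectors and mapping them to binary class probabilities — can be sketched as follows. This is a NumPy stand-in with random weights; the paper's model inserts a further CNN block and two fully connected layers before the softmax, and the feature sizes here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def hybrid_head(feat_numeric, feat_image, W, b):
    """Concatenate the numeric-model and image-model feature vectors and
    map the fused vector to two class probabilities (no-AKI vs AKI)."""
    fused = np.concatenate([feat_numeric, feat_image])
    return softmax(W @ fused + b)
```

The second probability could then be read as the predicted AKI risk for thresholding in the ICU workflow.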

Keywords: acute kidney injury, convolutional neural network, hybrid deep learning, patient record data, ResNet, ultrasound kidney images, VGG

Procedia PDF Downloads 129
1088 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer

Authors: S. M. Giripunje, Mohit Kumar

Abstract:

Acoustic energy, which exists in our everyday life and environment, has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technologies like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting intense and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not as popular as other energy harvesting methods, since sound waves have a lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements for certain applications are correspondingly low, and there is also a need to absorb noise in order to reduce noise pollution. The ability to reclaim acoustic energy and store it in a usable electrical form thus enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with polyvinylidene fluoride (PVDF) and PVDF-ZnO (PVDF doped with ZnO nanoparticles) piezoelectric cantilever beams placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified acoustic resonant standing wave develops inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration of the PVDF piezoelectric beams, generating electricity due to the direct piezoelectric effect. In order to maximize the harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator.
With a single PVDF beam placed inside the resonator, the harvested voltage and power reach their maximum near the open inlet of the resonator tube, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved towards the closed end of the tube, the voltage and power gradually decrease due to the decreasing acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two different configurations: aligned and zigzag. With the zigzag configuration, which leaves a more open path for acoustic air-particle motion, significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air-particle motion caused by the beams, it is found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique for harvesting sound wave energy has great potential for converting free energy into useful energy.
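Matching the beams' structural eigenfrequency to the resonator relies on the quarter-wavelength resonance condition, f₁ = c / (4L). A minimal sketch; the speed of sound and tube length are illustrative values, and the end correction is neglected:

```python
def quarter_wave_f1(c=343.0, length=0.25):
    """First acoustic eigenfrequency of a quarter-wavelength tube resonator,
    f1 = c / (4 * L), neglecting the open-end correction.

    c: speed of sound in m/s (343 m/s in air at ~20 degC, illustrative);
    length: tube length in m (illustrative).
    """
    return c / (4.0 * length)
```

For a 0.25 m tube in air, this gives a first eigenfrequency of 343 Hz, which is the frequency each cantilever beam would be tuned to.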

Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)

Procedia PDF Downloads 379
1087 Self-Organizing Maps for Credit Card Fraud Detection and Visualization

Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang

Abstract:

This study focuses on the application of self-organizing map (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
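A single SOM training step — finding the best-matching unit (BMU) for a transaction feature vector, then pulling nearby map weights toward it with a Gaussian neighborhood — can be sketched as follows. The grid size, learning rate, and neighborhood width are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One online SOM update.

    weights: (rows, cols, dim) map of prototype vectors, updated in place.
    x: input feature vector of length dim.
    Returns the grid coordinates of the best-matching unit (BMU).
    """
    dists = np.linalg.norm(weights - x, axis=2)           # distance to every unit
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    rows, cols = np.indices(dists.shape)
    grid_d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2  # grid distance to BMU
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))              # Gaussian neighborhood
    weights += lr * h[..., None] * (x - weights)           # pull toward input
    return bmu
```

After training on normal transactions, a new transaction whose distance to every learned prototype is large would be flagged as a potential fraud pattern.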

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 55
1086 The Minimum Patch Size Scale for Seagrass Canopy Restoration

Authors: Aina Barcelona, Carolyn Oldham, Jordi Colomer, Teresa Serra

Abstract:

The loss of seagrass meadows worldwide is being tackled by formulating coastal restoration strategies. Seagrass loss results in a network of vegetated patches that are barely interconnected, and consequently, the ecological services they provide may be highly compromised. Hence, there is a need to optimize coastal management efforts in order to implement successful restoration strategies, not only by modifying the architecture of the canopies but also by gathering information on the hydrodynamic conditions of the seabeds. To obtain information on the hydrodynamics within vegetated patches, this study analyzes the scale of the minimum patch lengths at which management strategies can be used effectively. To this aim, a set of laboratory experiments was conducted in a laboratory flume where the plant densities, patch lengths, and hydrodynamic conditions were varied to discern the vegetated patch lengths that can provide optimal ecosystem services for canopy development. Two possible patch behaviours based on turbulent kinetic energy (TKE) production were determined: one where plants do not interact with the flow, and the other where plants interact with waves and produce TKE. Furthermore, this study determines the minimum patch lengths that can support successful restoration management. A canopy will produce TKE depending on its density, the length of the vegetated patch, and the wave velocities. Therefore, a vegetated patch will produce plant-wave interaction under high wave velocities when it has a large length and a high canopy density.
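TKE is conventionally computed from the variances of the velocity fluctuations about their means, TKE = ½(u'² + v'² + w'²). A minimal sketch of this standard definition; the paper's measurement details are not reproduced here:

```python
import numpy as np

def tke(u, v, w):
    """Turbulent kinetic energy per unit mass from three velocity time
    series, using the population variance of each component as the mean
    squared fluctuation about its time average."""
    return 0.5 * (np.var(u) + np.var(v) + np.var(w))
```

Comparing this quantity inside and outside a patch is what distinguishes the two behaviours above: a patch that does not interact with the flow leaves TKE essentially unchanged, while plant-wave interaction raises it.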

Keywords: seagrass, minimum patch size, turbulent kinetic energy, oscillatory flow

Procedia PDF Downloads 193
1085 Environmental Literacy of Teacher Educators in Colleges of Teacher Education in Israel

Authors: Tzipi Eshet

Abstract:

The importance of environmental education as part of a national strategy to promote the environment is recognized around the world. Lecturers at colleges of teacher education bear considerable responsibility, directly and indirectly, for the environmental literacy of students who will end up teaching in the school system. This study examined whether lecturers in colleges of teacher education and teacher training in Israel are able and willing to develop environmental literacy among their students. Capability and readiness are assessed by evaluating the level of the dimensions of environmental literacy, which include knowledge of environmental issues, attitudes related to the environmental agenda, and ‘green’ patterns of behavior in everyday life. The survey included 230 lecturers from 22 state colleges, coming from various sectors (secular, religious, and Arab), from different academic fields, and from different personal backgrounds. Firstly, the results show that the higher the commitment to environmental issues, the lower the satisfaction with the current situation. In general, the respondents show positive environmental attitudes in all categories examined; they feel that they can personally influence responsible environmental behavior in others and are able to internalize environmental education in schools and colleges, and they also report positive environmental behavior. There are no significant differences between teachers of different background characteristics when it comes to behavior patterns that generate personal income (e.g. returning bottles for deposit). Women show more environmentally responsible behavior than men. Jewish lecturers show, in most categories, more responsible behavior than Druze and Arab lecturers; however, with regard to attitudes, Arab and Druze lecturers have a stronger sense of their ability to influence the environmental agenda. The knowledge test, which included 15 questions, was mostly based on basic environmental issues.
The average score was adequate: 83.6. Science lecturers' environmental literacy is significantly higher than that of the other lecturers. The larger their environmental knowledge base, the more environmental their attitudes and the more responsible they feel toward the environment. It can be concluded from the research findings that knowledge is a fundamental basis for developing environmental literacy. Environmental knowledge has a positive effect on the development of environmental commitment, which is reflected in attitudes and behavior. This conclusion probably also holds for the general public. Hence, expanding knowledge of environmental issues among the general public, and among teacher educators in particular, is of great importance. From the open questions in the survey, it is evident that most of the lecturers are interested in the subject and understand the need to integrate environmental issues into the colleges, either directly by teaching courses on the environment or indirectly by integrating environmental issues into different subjects, as well as by asking the students to set an example (such as avoiding unnecessary printing and keeping the environment clean). The curriculum at colleges should include a variety of options for developing and enhancing the environmental literacy of student teachers, but there must first be a focus on bringing their teachers to a high level of literacy so they can meet the difficult and important task they face.

Keywords: colleges of teacher education, environmental literacy, environmental education, teacher's teachers

Procedia PDF Downloads 278
1084 A Multi-Objective Reliable Location-Inventory Capacitated Disruption Facility Problem with Penalty Cost Solved with Efficient Metaheuristic Algorithms

Authors: Elham Taghizadeh, Mostafa Abedzadeh, Mostafa Setak

Abstract:

In a logistics network, opened facilities are expected to operate continuously over a long time horizon without failure; in real-world problems, however, facilities may face disruptions. This paper studies a reliable joint inventory-location problem that optimizes the cost of facility location, customer assignment, and inventory management decisions when facilities face failure risks. In our model, we assume that when a facility is out of service, its customers may be reassigned to other operational facilities; otherwise, they incur high penalty costs associated with losing service. To bring the model closer to real-world problems, it is formulated on the basis of the p-median problem, and the facilities are considered to have limited capacities. We define a new binary variable (Z_is) indicating that a customer is not assigned to any facility. The problem is a bi-objective model: the first objective minimizes the sum of facility construction costs and expected inventory holding costs, and the second minimizes the maximum expected customer cost under normal and failure scenarios. To solve this model, the NSGA-II and MOSS algorithms are applied to find the Pareto-archive solutions. Response Surface Methodology (RSM) is also applied to optimize the NSGA-II algorithm parameters. We compare the performance of the two algorithms on three metrics, and the results show that NSGA-II is more suitable for our model.
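Both NSGA-II and MOSS maintain an archive of non-dominated solutions for the two objectives above. A minimal sketch of the Pareto-dominance filter at the core of such an archive is shown below; the function names and the candidate objective values are illustrative, not results from the paper:

```python
# Pareto-archive filtering for a bi-objective minimization problem:
# objective 1 = construction + expected inventory holding cost,
# objective 2 = maximum expected customer cost.

def dominates(a, b):
    """True if solution a dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_archive(solutions):
    """Keep only the non-dominated solutions (the Pareto archive)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Toy candidate solutions as (objective 1, objective 2) pairs.
candidates = [(100, 50), (120, 30), (90, 70), (130, 55)]
front = pareto_archive(candidates)  # (130, 55) is dominated by (100, 50)
```

Each generation of the metaheuristic would re-apply this filter to the union of the old archive and the new offspring, so only trade-off solutions between the two cost objectives survive.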

Keywords: joint inventory-location problem, facility location, NSGA-II, MOSS

Procedia PDF Downloads 521
1083 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models

Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur

Abstract:

In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. This sensor data is transmitted to a central MQTT server, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the server, the data is processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. The implementation achieves 99% accuracy.
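The server-side step, where a sensor payload arriving over MQTT is decoded and handed to the recommendation model, can be sketched as follows. The topic name, payload field names, and the threshold rules standing in for the trained AI/ML model are all hypothetical placeholders:

```python
import json

# Assumed MQTT topic and JSON payload layout for a field node; in the
# paper, recommendations come from trained AI/ML models, not fixed rules.
TOPIC = "farm/field1/sensors"  # hypothetical topic name

def decode_payload(raw):
    """Decode a JSON sensor message published by a field node."""
    reading = json.loads(raw)
    return {k: float(reading[k])
            for k in ("temperature", "humidity", "soil_moisture")}

def recommend_crop(reading):
    """Toy stand-in for the ML model: simple threshold rules."""
    if reading["soil_moisture"] > 60 and reading["humidity"] > 70:
        return "rice"
    if reading["temperature"] > 30 and reading["soil_moisture"] < 30:
        return "millet"
    return "wheat"

raw = '{"temperature": 24, "humidity": 80, "soil_moisture": 65}'
print(recommend_crop(decode_payload(raw)))  # -> rice
```

In a deployment, `decode_payload` would be called from the MQTT client's message callback for `TOPIC`, and `recommend_crop` would be replaced by inference against the trained model.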

Keywords: IoT, MQTT protocol, machine learning, sensors, publish/subscribe, agriculture, humidity

Procedia PDF Downloads 60
1082 Nano-Filled Matrix Reinforced by Woven Carbon Fibers Used as a Sensor

Authors: K. Hamdi, Z. Aboura, W. Harizi, K. Khellil

Abstract:

Improving the electrical properties of organic matrix composites has been investigated in several studies. One of the current barriers to extending the use of composites to more varied applications is their poor electrical conductivity. In carbon fiber composites, the organic matrix is responsible for the insulating properties of the resulting composite. However, the properties of nano-filled continuous carbon fiber composites are less investigated. This work characterizes the effect of carbon black nano-fillers on the properties of woven carbon fiber composites. First of all, SEM observations were performed to localize the nano-particles. They showed that the particles penetrated into the fiber zone (Figure 1). By reaching the fiber zone, the carbon black nano-fillers created network connectivity between fibers, that is, an easy pathway for the current. This explains the observed improvement in the electrical conductivity of the composites when carbon black is added. The measurement was performed with a four-point electrical circuit. It shows that the electrical conductivity of the neat-matrix composite increased from 80 S/cm to 150 S/cm with the addition of 9 wt% carbon black, and to 250 S/cm with 17 wt% of the same nano-filler. These results suggest that the composite could be used as a strain gauge. This makes it possible to study the influence of mechanical loading (flexure, tension) on the electrical properties of the composite by recording the variation of an electrical current passing through the material during mechanical testing. Three different configurations were tested, depending on the carbon black content. These investigations could lead to the development of an auto-instrumented material.
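The reported conductivity gain translates directly into a lower baseline resistance for a sensing element, which is what makes current-based strain monitoring practical. A quick check using the abstract's conductivity figures; the specimen geometry and the standard piezoresistive gauge-factor relation are assumptions, not values from the paper:

```python
def resistance(length_m, area_m2, sigma_s_per_m):
    """Resistance of a uniform conductor: R = L / (sigma * A)."""
    return length_m / (sigma_s_per_m * area_m2)

# Conductivities from the abstract, converted from S/cm to S/m.
sigma_neat = 80 * 100    # neat matrix composite: 80 S/cm
sigma_cb17 = 250 * 100   # with 17 wt% carbon black: 250 S/cm

# Assumed specimen geometry: 10 mm gauge length, 1 mm^2 cross-section.
L, A = 0.01, 1e-6
print(resistance(L, A, sigma_neat))  # 1.25 ohm
print(resistance(L, A, sigma_cb17))  # 0.4 ohm

# For strain-gauge use, the standard piezoresistive relation
# GF = (dR/R) / strain links the recorded current change to strain.
def gauge_factor(dR_over_R, strain):
    return dR_over_R / strain
```

The lower baseline resistance at higher filler content means a larger absolute current for a given excitation voltage, which eases the measurement of the small relative changes produced by strain.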

Keywords: carbon fibers composites, nano-fillers, strain-sensors, auto-instrumented

Procedia PDF Downloads 406