Search results for: Artificial Bee Colony algorithm
2746 Investigating Water-Oxidation Using a Ru(III) Carboxamide Water Coordinated Complex
Authors: Yosra M. Badiei, Evelyn Ortiz, Marisa Portenti, David Szalda
Abstract:
The water-oxidation half-reaction is a critical reaction that can be driven by a sustainable energy source (e.g., solar or wind) and coupled with a fuel-forming reaction that stores the released electrons and protons from water (e.g., as H₂ or methanol). The use of molecular water-oxidation catalysts (WOCs) allows the rational design of redox-active metal centers and provides a better understanding of their structure-activity relationship. Herein, the structure of a Ru(III) complex bearing a doubly deprotonated N,N'-bis(aryl)pyridine-2,6-dicarboxamide ligand, which contains a water molecule in its primary coordination sphere, was elucidated by single-crystal X-ray diffraction. Further spectroscopic data and pH-dependent electrochemical studies reveal its water-oxidation reactivity. Mechanistic details of O₂ formation by this complex will also be addressed.
Keywords: water-oxidation, catalysis, ruthenium, artificial photosynthesis
Procedia PDF Downloads 199
2745 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with "optimal" solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application analyzing data transactions, learning and predicting future states from current and past states in real time, and communicating those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin, with the ability to aggregate results across a system's lifecycle and across logical and operational groupings of systems.
This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support; detachable and deployable logistics services; and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over adversaries who struggle with maintaining system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This will allow stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. Finally, sponsors are able to validate product deliverables efficiently and with much higher accuracy than in previous years.
Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
Procedia PDF Downloads 158
2744 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand's fastest growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea how much water they use. Hence, one objective of this research is forecasting water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with seasonality and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better-performing forecasts, while other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. The aim is to examine the results of the combined method (hybrid model) and the individual methods and to compare their accuracy and robustness. In order to use ARIMA, the data should be stationary.
Also, ANN has had successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the individual methods. Because water demand is dominated by different seasonalities, I combine different methods in order to find their sensitivity to weather conditions, calendar effects, or other seasonal patterns. The advantage of this combination is the reduction of errors by averaging the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions by different methods vary. ANN gives more accurate forecasts than the other methods, and preprocessing is essential when using seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
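The hybrid model described above reduces errors by averaging the individual forecasts. A minimal sketch of that combination step, with made-up placeholder numbers rather than the Hamilton consumption data:

```python
# Hypothetical sketch: combining two forecast series (e.g., ARIMA and ANN
# outputs) by simple averaging, as the hybrid model describes. All numbers
# are illustrative placeholders.

def hybrid_forecast(arima_preds, ann_preds):
    """Average two forecast series element-wise to form a hybrid forecast."""
    if len(arima_preds) != len(ann_preds):
        raise ValueError("forecast series must have equal length")
    return [(a + b) / 2.0 for a, b in zip(arima_preds, ann_preds)]

def mean_absolute_error(actual, predicted):
    """Average absolute deviation between actual and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [102.0, 98.0, 110.0, 105.0]   # placeholder daily demand
arima = [100.0, 97.0, 108.0, 103.0]    # placeholder ARIMA forecasts
ann = [104.0, 101.0, 113.0, 108.0]     # placeholder ANN forecasts

hybrid = hybrid_forecast(arima, ann)
print(hybrid)  # [102.0, 99.0, 110.5, 105.5]
```

In this toy example the ARIMA series under-predicts and the ANN series over-predicts, so the average lands closer to the actual values than either model alone, which is the intuition behind the error-cancellation benefit mentioned in the abstract.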
Procedia PDF Downloads 336
2743 Regulation of the Regeneration of Epidermal Langerhans Cells by Stress Hormone
Authors: Junichi Hosoi
Abstract:
Epidermal Langerhans cells reside in the upper layer of the epidermis and play a role in immune surveillance. The finding of the close association of nerve endings with Langerhans cells triggered research on the systemic regulation of Langerhans cells. They disappear from the epidermis after exposure to environmental and internal stimuli and reappear about a week later. Myeloid progenitor cells are assumed to be one of the sources of Langerhans cells. We examined the effects of cortisol on the reappearance of Langerhans cells in vitro. Cord-blood-derived CD34-positive cells were cultured in medium supplemented with stem cell factor/Flt3 ligand/granulocyte macrophage-colony stimulating factor/tumor necrosis factor alpha/bone morphogenetic protein 7/transforming growth factor beta in the presence or absence of cortisol. Cells were analyzed by flow cytometry for CD1a (cluster of differentiation 1a), a marker of Langerhans cells and dermal dendritic cells, and CD39 (cluster of differentiation 39), an extracellular adenosine triphosphatase. Both CD1a-positive cells and CD39-positive cells were decreased by treatment with cortisol (suppression by 35% and 22% compared to no stress hormone, respectively). Differentiated Langerhans cells are attracted to the epidermis by chemokines secreted from keratinocytes. Epidermal keratinocytes were cultured in the presence or absence of cortisol and analyzed for the expression of CCL2 (C-C motif chemokine ligand 2) and CCL20 (C-C motif chemokine ligand 20), which are typical attractants of Langerhans cells, by quantitative reverse transcriptase polymerase chain reaction. The expression of both chemokines, CCL2 and CCL20, was suppressed by treatment with cortisol (suppression by 38% and 48% compared to no stress hormone, respectively). We examined whether plant extracts could counteract this suppression by cortisol.
Extracts of Ganoderma lucidum and Iris prevented the suppression of differentiation into CD39-positive cells and also the suppression of the gene expression of LC chemoattractants. These results suggest that cortisol, whether systemic or locally produced, blocks the supply of epidermal Langerhans cells at two steps: differentiation from the precursor and attraction to the epidermis. This suppression can possibly be blocked by some plant extracts.
Keywords: Langerhans cell, stress, CD39, chemokine
Procedia PDF Downloads 185
2742 Gravitational Water Vortex Power Plant: Experimental-Parametric Design of a Hydraulic Structure Capable of Inducing the Artificial Formation of a Gravitational Water Vortex Appropriate for Hydroelectric Generation
Authors: Henrry Vicente Rojas Asuero, Holger Manuel Benavides Muñoz
Abstract:
Approximately 80% of the energy consumed worldwide is generated from fossil sources, which are responsible for the emission of a large volume of greenhouse gases. For this reason, the current global trend is the widespread use of energy produced from renewable sources. This seeks safety and diversification of energy supply, based on social cohesion, economic feasibility, and environmental protection. In this scenario, small hydropower systems (P ≤ 10 MW) stand out due to their high efficiency, economic competitiveness, and low environmental impact. Small hydropower systems, along with wind and solar energy, are expected to represent a significant percentage of the world's energy matrix in the near term. Among the various technologies in the state of the art relating to small hydropower systems is the Gravitational Water Vortex Power Plant, a recent technology that excels because of its versatility of operation, since it can operate with heads in the range of 0.70 m-2.00 m and flow rates from 1 m³/s to 20 m³/s. Its operating principle is based on utilizing the rotational energy contained in a large, artificially induced water vortex. This paper presents the study and experimental design of an optimal hydraulic structure with the capacity to induce the artificial formation of a gravitational water vortex through a system of easy application and high efficiency, able to operate under conditions of very low head and minimum flow. The proposed structure consists of a channel with variable base, a vortex inductor, and a tangential flow generator, coupled to a circular tank with a conical-transition bottom hole. In the laboratory tests, the angular velocity of the water vortex was related to the geometric characteristics of the inductor channel, as well as to the influence of the conical-transition bottom hole on the physical characteristics of the water vortex.
The results show angular velocity values of greater magnitude as a function of depth; in addition, the presence of the conical transition in the bottom hole of the circular tank improves the conditions for water vortex formation while increasing the angular velocity values. Thus, the proposed system is a sustainable solution for the energy supply of rural areas near watercourses.
Keywords: experimental model, gravitational water vortex power plant, renewable energy, small hydropower
Procedia PDF Downloads 288
2741 Identifying the Hidden Curriculum Components in the Nursing Education
Authors: Alice Khachian, Shoaleh Bigdeli, Azita Shoghie, Leili Borimnejad
Abstract:
Background and aim: The hidden curriculum is crucial in nursing education and can determine professionalism and professional competence. It has a significant effect on students' moral performance in relation to patients. The present study was conducted with the aim of identifying the hidden curriculum components in a nursing and midwifery faculty. Methodology: This ethnographic study was conducted over two years using the Spradley method in one of the nursing schools located in Tehran. In this focused ethnographic research, the approach of Lincoln and Guba, i.e., transferability, confirmability, and dependability, was used. To increase the validity of the data, they were collected from different sources, such as participatory observation, formal and informal interviews, and document review. Two hundred days of participatory observation, fifty informal interviews, and fifteen formal interviews, drawing on the maximum opportunities and conditions available to obtain multiple and multilateral information, added to the validity of the data. Due to the COVID situation, some interviews were conducted virtually, and the activity of professors and students in the virtual space was also monitored. Findings: The components of the hidden curriculum of the faculty are: the atmosphere (physical environment, organizational structure, rules and regulations, hospital environment), the interaction between actors, and teaching-learning activities, which ultimately revealed "a disconnection between goals, speech, behavior, and results." Conclusion: The atmosphere and the various actors and activities have mutual effects on the process of student development, since students have the most contact first with their peers, which leads to the most learning, and secondly with their teachers. Clinicians who have close, person-to-person contact with students can have very important effects on them.
Students who meet capable and satisfied professors on their way become interested in their field and hopeful about their future by following the mentorship of these professors. On the other hand, weak and dissatisfied professors lead students to feel abandoned; by forming a colony of peers with different backgrounds, a group of students' personalities are distorted and they move away from family values, which necessitates a change in some cultural practices at the faculty level.
Keywords: hidden curriculum, nursing education, ethnography, nursing
Procedia PDF Downloads 108
2740 Aging Behaviour of 6061 Al-15 vol% SiC Composite in T4 and T6 Treatments
Authors: Melby Chacko, Jagannath Nayak
Abstract:
The aging behaviour of a 6061 Al-15 vol% SiC composite was investigated using Rockwell B hardness measurement. The composite was solutionized at 350°C and quenched in water. The composite was aged at room temperature (T4 treatment) and also at 140°C, 160°C, 180°C, and 200°C (T6 treatment). The natural and artificial aging behaviour of the composite was studied using aging curves determined at the different temperatures. The aging period for peak aging at each temperature was identified. The time required to attain peak aging decreased with increasing aging temperature. The peak hardness was found to increase with increasing aging temperature, and the highest peak hardness was observed at 180°C. Beyond 180°C the peak hardness was found to decrease.
Keywords: 6061 Al-SiC composite, aging curve, Rockwell B hardness, T4, T6 treatments
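Identifying the peak-aging point on an aging curve, as done above for each temperature, amounts to locating the maximum of hardness versus time. A small sketch with invented placeholder values (not the measured Rockwell B data):

```python
# Illustrative sketch: locating peak aging (maximum hardness) on an aging
# curve. The hardness values below are made-up placeholders, not the
# measured data for the 6061 Al-15 vol% SiC composite.

def peak_aging(times_h, hardness_hrb):
    """Return (time, hardness) at the peak of an aging curve."""
    idx = max(range(len(hardness_hrb)), key=lambda i: hardness_hrb[i])
    return times_h[idx], hardness_hrb[idx]

# Placeholder aging curve at one temperature: hardness rises to a peak,
# then falls as over-aging sets in.
times = [0, 2, 4, 6, 8, 10]          # aging time, hours
hardness = [55, 62, 70, 74, 71, 66]  # Rockwell B hardness

t_peak, h_peak = peak_aging(times, hardness)
print(t_peak, h_peak)  # 6 74
```

Repeating this over the curves for 140°C, 160°C, 180°C, and 200°C would reproduce the comparison reported in the abstract: shorter peak-aging times and (up to 180°C) higher peak hardness as temperature rises.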
Procedia PDF Downloads 266
2739 The Application of Artificial Neural Network for Bridge Structures Design Optimization
Authors: Angga S. Fajar, A. Aminullah, J. Kiyono, R. A. Safitri
Abstract:
This paper discusses the application of ANN for optimizing bridge structure design. ANN has been applied in various fields of science for prediction and optimization. Structural optimization has several benefits, including accelerating the structural design process, saving structural material, and minimizing the self-weight and mass of the structure. In this paper, three types of bridge structure are optimized: a PSC I-girder superstructure, a composite steel-concrete girder superstructure, and an RC bridge pier. A different optimization strategy, implementing the back-propagation method of ANN, is applied to each bridge structure. The optimal weight and an easier design process for each bridge structure, with satisfactory error, are achieved.
Keywords: bridge structures, ANN, optimization, back propagation
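The back-propagation method named above can be sketched with a minimal one-hidden-layer network. This is a generic illustration trained on a toy linear target, standing in for a structural response surface; the layer sizes, learning rate, and training data are all assumptions, not the paper's models:

```python
import math
import random

# Minimal back-propagation sketch (one hidden layer) of the kind applied to
# bridge-design optimization. The target function below is a toy stand-in,
# not actual girder or pier data.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    def __init__(self, n_in=2, n_hid=4):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hid)]

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return sum(w * h for w, h in zip(self.w2, self.h))

    def train_step(self, x, y, lr=0.1):
        out = self.forward(x)
        err = out - y
        for j, h in enumerate(self.h):
            grad_h = err * self.w2[j] * h * (1 - h)  # chain rule through sigmoid
            self.w2[j] -= lr * err * h               # output-layer update
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * grad_h * xi    # hidden-layer update
        return err * err

# Toy target: y = 0.3*x0 + 0.5*x1 (stand-in for a structural response).
data = [([a / 4.0, b / 4.0], 0.3 * a / 4.0 + 0.5 * b / 4.0)
        for a in range(5) for b in range(5)]

net = TinyNet()
first_loss = sum(net.train_step(x, y) for x, y in data)
for _ in range(200):
    last_loss = sum(net.train_step(x, y) for x, y in data)
print(first_loss > last_loss)  # training reduces the total squared error
```

In a design-optimization setting, a network like this serves as a fast surrogate for expensive structural analyses, which is what makes ANN-based optimization accelerate the design process.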
Procedia PDF Downloads 370
2738 Effect of Baffles on the Cooling of Electronic Components
Authors: O. Bendermel, C. Seladji, M. Khaouani
Abstract:
In this work, we performed a numerical study of the thermal and dynamic behaviour of air in a horizontal channel with electronic components. The influence of using baffles on the velocity and temperature profiles is discussed. The finite volume method and the SIMPLE algorithm are used for solving the equations of conservation of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components; the velocity increases to about three times the initial velocity.
Keywords: electronic components, baffles, cooling, fluids engineering
Procedia PDF Downloads 294
2737 Ecological Ice Hockey Butterfly Motion Assessment Using Inertial Measurement Unit Capture System
Authors: Y. Zhang, J. Perez, S. Marnier
Abstract:
To the authors' best knowledge, no study on goaltending butterfly motion has been completed in real conditions, during an ice hockey game or training practice. This motion, performed to make saves, is unnatural, intense, and repeated. The target of this research activity is to identify representative biomechanical criteria for this goaltender-specific movement pattern. Determining specific physical parameters may make it possible to identify the risk of hip and groin injuries sustained by goaltenders. Four professional or academic goalies were instrumented during ice hockey training practices with five inertial measurement units. These devices were inserted in dedicated pockets located on each thigh and shank, with the fifth on the lumbar spine. A camera was also installed close to the ice to observe and record the goaltenders' activities, especially the butterfly motions, in order to synchronize the captured data with the behavior of the goaltender. Each recording began with a calibration of the inertial units and a calibration of the fully equipped goaltender on the ice. Three butterfly motions were recorded outside the training practice to define reference individual butterfly motions. Then, a data processing algorithm based on the Madgwick filter computed hip and knee joint ranges of motion as well as specific angular velocities. The developed software automatically identified and analyzed all the butterfly motions executed by the four goaltenders. To date, it is still too early to show that the analyzed criteria are representative of the trauma generated by the butterfly motion, as the research is only at its beginning. However, this descriptive research activity is promising in its ecological assessment, and once the criteria are found, the tools and protocols defined will allow the prevention of as many injuries as possible.
It will thus be possible to build a specific training program for each goalie.
Keywords: biomechanics, butterfly motion, human motion analysis, ice hockey, inertial measurement unit
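The study fuses gyroscope and accelerometer data with the Madgwick filter to track joint angles. The full quaternion-based Madgwick filter is lengthy; the sketch below is a deliberately reduced single-axis complementary filter, a simpler stand-in for the same sensor-fusion idea, run on synthetic samples rather than recorded IMU data:

```python
import math

# Simplified stand-in for the orientation step: a complementary filter that
# fuses gyroscope integration with an accelerometer tilt estimate to track
# a single joint angle. (The study itself uses the Madgwick filter; this is
# a reduced 1-axis sketch, and the sample data below is synthetic.)

def complementary_filter(gyro_dps, accel_xy, dt=0.01, alpha=0.98):
    """Fuse gyro rate (deg/s) with accel tilt (from ax, az) per sample."""
    angle = 0.0
    angles = []
    for rate, (ax, az) in zip(gyro_dps, accel_xy):
        accel_angle = math.degrees(math.atan2(ax, az))  # tilt from gravity
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        angles.append(angle)
    return angles

# Synthetic 1-second capture: constant 90 deg/s rotation, with the
# accelerometer tilt agreeing with the integrated gyro angle.
gyro = [90.0] * 100
accel = [(math.sin(math.radians(0.9 * i)), math.cos(math.radians(0.9 * i)))
         for i in range(1, 101)]
track = complementary_filter(gyro, accel)
print(round(track[-1], 1))  # 90.0
```

The gyro term gives smooth short-term tracking while the accelerometer term corrects long-term drift; the Madgwick filter applies the same principle in 3-D using quaternions and gradient descent.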
Procedia PDF Downloads 124
2736 Challenges beyond the Singapore Future-Ready School ‘LEADER’ Qualities
Authors: Zoe Boon Suan Loy
Abstract:
Exploratory research undertaken in 2020, at the beginning of the COVID-19 pandemic, examined the changing roles of Singapore school leaders as they lead teachers in developing future-ready learners. While it is evident that ‘LEADER’ qualities epitomize the knowledge, competencies, and skills required, recent events in an increasingly VUCA and BANI world, characterized by the massively disruptive Russia-Ukraine war, unabating tension in US-Sino relations, issues related to sustainability, and rapid ageing, will have an impact on school leadership. As an increasingly complex endeavour, this requires a relook as leaders guide teachers in nurturing holistically developed future-ready students. Digitalisation, new technology, and the push for a green economy will be the key driving forces affecting job availability. Similarly, the rapid growth of artificial intelligence (AI) capabilities, including ChatGPT, will aggravate and add tremendous stress to the work of school leaders. This paper seeks to explore the key school leadership shifts required beyond the ‘LEADER’ qualities as school leaders respond to the changes, challenges, and opportunities of the 21st-century new normal. The research findings for this paper are based on an exploratory qualitative study of the perceptions of 26 school leaders (vice-principals) who were attending a milestone educational leadership course at the National Institute of Education, Nanyang Technological University, Singapore. A structured questionnaire was designed to collect the data, which were then analysed using coding methodology. Broad themes on the key competencies and skills of future-ready leaders in the Singapore education system were then identified. Key findings: In undertaking their leadership roles as leaders of future-ready learners, school leaders need to demonstrate the ‘LEADER’ qualities.
They need to have a long-term view, understand the educational imperatives, have a good awareness of self and the dispositions of a leader, be effective in optimizing external leverage, and be clear about their role expectations. These ‘LEADER’ qualities are necessary and relevant in the post-COVID era. Beyond this, school leaders with ‘LEADER’ qualities are well supported by the Ministry of Education, which takes cognizance of emerging trends and continually reviews education policies to address related issues. Concluding statement: Discussions within the education ecosystem and among other stakeholders on the implications of the use of artificial intelligence and ChatGPT for the school curriculum, including content knowledge, pedagogy, and assessment, are ongoing. This augurs well for school leaders as they undertake their responsibilities as leaders of future-ready learners.
Keywords: Singapore education system, ‘LEADER’ qualities, school leadership, future-ready leaders, future-ready learners
Procedia PDF Downloads 71
2735 Optimal Design of Wind Turbine Blades Equipped with Flaps
Authors: I. Kade Wiratama
Abstract:
As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Amongst them, trailing-edge flaps have proven to be effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture rather than alleviating blade loads. A software tool was developed especially for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem which gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic algorithm optimisation tool especially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps was constructed. The design optimisation tool was employed to carry out design case studies. The results of design case studies on the wind turbine AWT 27 reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in the average power. This effect, however, reduces dramatically as the size increases. For constant-speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by as much as about 5%.
However, by re-designing the blade pretwist, the overall improvement can reach as high as 12%.
Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero
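The genetic-algorithm search described above can be sketched in miniature. The fitness function below is an invented surrogate that peaks at 70% span, mirroring the reported best flap location; it is not the WTAero aerodynamic model, and the population sizes and mutation settings are assumptions:

```python
import random

# Hedged sketch of a genetic-algorithm search like the one in the design
# tool: it tunes the flap spanwise location against a made-up power-gain
# surrogate peaking near 70% span. The fitness function is illustrative.

random.seed(1)

def fitness(loc):
    """Toy power-gain surrogate: maximum at 70% of blade span."""
    return 1.0 - (loc - 0.70) ** 2

def evolve(pop_size=20, generations=40):
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, 0.02)  # crossover + mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2))  # converges near 0.70
```

In the real tool, evaluating `fitness` means running the aerodynamic simulator at each candidate design, which is why the GA's population-based, derivative-free search is a natural fit.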
Procedia PDF Downloads 336
2734 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithms in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the proposed method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although the high computational cost and complex processing steps of these algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper offer good results with minimal time complexity.
Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
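The MST-based fusion idea can be illustrated in one dimension: transform both sources, merge approximation coefficients by averaging and detail coefficients by max-absolute selection, then invert. This is a deliberately simplified one-level Haar sketch on synthetic scan lines, not the paper's 2-D multi-level method with consistency verification:

```python
# Simplified multi-scale fusion sketch: one-level Haar transform of two 1-D
# signals, max-absolute selection on detail coefficients, averaging on
# approximation coefficients, then inverse transform. The signals below are
# synthetic stand-ins for a visible and an IR scan line.

def haar_forward(x):
    approx = [(a + b) / 2.0 for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(sig_a, sig_b):
    aa, da = haar_forward(sig_a)
    ab, db = haar_forward(sig_b)
    approx = [(p + q) / 2.0 for p, q in zip(aa, ab)]                 # average lows
    detail = [p if abs(p) >= abs(q) else q for p, q in zip(da, db)]  # max-abs highs
    return haar_inverse(approx, detail)

visible = [10.0, 10.0, 50.0, 10.0, 10.0, 10.0, 10.0, 10.0]
infrared = [10.0, 10.0, 10.0, 10.0, 10.0, 40.0, 10.0, 10.0]
fused = fuse(visible, infrared)
print(fused)  # [10.0, 10.0, 40.0, 0.0, 2.5, 32.5, 10.0, 10.0]
```

Note how the sharp features of both inputs (the visible edge at index 2 and the IR hot spot at index 5) survive into the fused signal, which is the purpose of selecting the larger-magnitude detail coefficients at each scale.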
Procedia PDF Downloads 114
2733 Procedure to Optimize the Performance of Chemical Laser Using the Genetic Algorithm Optimizations
Authors: Mohammedi Ferhate
Abstract:
This work presents details of a study of the entire flow inside the facility in which the exothermic chemical reaction process in the chemical laser cavity is analyzed. In this paper we describe the principles of chemical lasers, in which population inversion is produced by chemical reactions. We explain the device for converting chemical potential energy into laser energy and show that the phenomenon has an explosive trend. Finally, the feasibility and effectiveness of the proposed method are demonstrated by computer simulation.
Keywords: genetic, lasers, nozzle, programming
Procedia PDF Downloads 92
2732 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development
Authors: Sreto Boljevic
Abstract:
In the absence of considerable investment in electricity generation, transmission, and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. The anticipated growth and proliferation of electric vehicles (EVs) and heat pumps (HPs) make it likely that the additional load from EV charging and HP operation will require capital investment in the DN. While an area-wide implementation of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in off-network mode, have been offered as a means to meet growing electricity demand while maintaining and ever-improving DN reliability, resiliency, and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating the peak load that the network should meet. However, new problems are arising. These problems are associated with a high degree of proliferation of EVs and HPs as loads imposed on the DN, in addition to the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load proliferation in low-voltage DNs may have numerous impacts that create issues including energy losses, voltage control, fault levels, reliability, resiliency, and power quality. To mitigate the negative impacts and at the same time enhance the positive impacts of the new operational state of the DN, CHP system integration can be seen as the best action to postpone or reduce the capital investment needed to facilitate and maximize the benefits of EV, HP, and RES integration in the low-voltage DN. The aim of this paper is to develop an algorithm using an analytical approach.
Implementation of the algorithm will provide a way for optimal placement of CHP systems in the DN in order to maximize the integration of RES and the growing proliferation of EVs and HPs.
Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES
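One common formulation of "optimal placement" is choosing the bus that minimizes network losses. The sketch below is an illustration of that idea on a toy radial feeder with a deliberately simplified loss model (losses proportional to the square of the power flow in each section); it is not the paper's analytical algorithm, and all feeder data are invented:

```python
# Illustrative sketch: try a CHP unit at each bus of a toy radial feeder and
# keep the bus that minimizes total line losses. The feeder data and the
# simplified loss model are assumptions for illustration only.

def section_flows(loads_kw, chp_bus, chp_kw):
    """Power carried by each line section of a radial feeder (source at bus 0)."""
    net = loads_kw[:]
    net[chp_bus] -= chp_kw  # CHP injection offsets local demand
    # Section i carries everything downstream of it.
    return [sum(net[i:]) for i in range(1, len(net))]

def total_losses(loads_kw, chp_bus, chp_kw, r=1e-5):
    """Losses modelled as r * flow^2 summed over sections."""
    return sum(r * f * f for f in section_flows(loads_kw, chp_bus, chp_kw))

def best_chp_bus(loads_kw, chp_kw):
    candidates = range(1, len(loads_kw))  # bus 0 is the source
    return min(candidates, key=lambda b: total_losses(loads_kw, b, chp_kw))

# Toy feeder: source bus plus four load buses (kW), heaviest load at the end.
loads = [0.0, 50.0, 60.0, 80.0, 150.0]
print(best_chp_bus(loads, chp_kw=100.0))  # 4
```

The toy result matches the engineering intuition that generation placed close to the heaviest demand relieves the most heavily loaded sections and so cuts losses, the same pressure that drives CHP siting in a real low-voltage DN.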
Procedia PDF Downloads 202
2731 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems
Authors: Borhan Marzougui
Abstract:
The Road and Transport Authority (RTA) is moving ahead with the implementation of the leader's vision in exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major and real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop smart techniques to handle them. In fact, new IoT devices are being recruited into botnets that DDoS attackers accumulate for their purposes. The aim of this paper is to provide a relevant understanding of the dangerous types of DDoS attack related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm. This algorithm uses dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures a preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) readers are connected within the secured network, and the data they generate are analyzed in real time by the intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The obtained results have shown a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.
Keywords: IoT, DDoS, attacks, botnet, security, agents
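One elementary behavior such a cooperative agent could apply when malicious packets start to spread is rate-based flagging of source devices. The sketch below illustrates that single check on invented traffic; the thresholds, device names, and sample data are all assumptions, not the paper's distributed algorithm:

```python
from collections import Counter

# Hedged sketch of one agent-side check: flag source devices whose packet
# rate in an observation window exceeds a threshold. Device IDs, thresholds,
# and the traffic sample are invented for illustration.

def flag_sources(packets, window_s, max_rate_pps):
    """Return device IDs whose average packet rate exceeds max_rate_pps."""
    counts = Counter(src for src, _ts in packets)
    return sorted(src for src, n in counts.items() if n / window_s > max_rate_pps)

# Synthetic 10-second window: a compromised camera floods the network at
# 100 packets/s while an RFID reader behaves normally at 2 packets/s.
traffic = [("camera-07", t * 0.01) for t in range(1000)]   # 100 pps
traffic += [("rfid-03", t * 0.5) for t in range(20)]       # 2 pps
print(flag_sources(traffic, window_s=10, max_rate_pps=50))  # ['camera-07']
```

In a multi-agent system, agents would additionally share such flags with their neighbors so that a device flooding one segment can be quarantined across the whole secured network, which is the cooperative aspect the paper emphasizes.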
Procedia PDF Downloads 1412730 An Investigation Enhancing E-Voting Application Performance
Authors: Aditya Verma
Abstract:
Blockchain-based e-voting provides a distributed system in which data is replicated on every node in the network and is reliable and secure thanks to the blockchain's immutability property. This work compares blockchain consensus algorithms previously used for e-voting applications on the basis of performance and node scalability, selects the optimal one, and improves on an earlier implementation by proposing solutions to the loopholes of that consensus algorithm in our chosen application, e-voting.Keywords: blockchain, parallel bft, consensus algorithms, performance
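The immutability property that underpins blockchain e-voting can be illustrated with a minimal hash-chained ledger. This is a toy sketch only — it has no consensus, networking, or voter authentication, which a real parallel-BFT deployment would add:

```python
import hashlib
import json

def block_hash(body):
    # Hash the block body's canonical (key-sorted) JSON form.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_vote(chain, vote):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "vote": vote, "prev": prev}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain):
    # A chain is valid iff every hash matches its body and links to its parent.
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "vote", "prev")}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_vote(chain, {"ballot": "A"})
append_vote(chain, {"ballot": "B"})
print(verify(chain))                   # the untampered ledger verifies
chain[0]["vote"]["ballot"] = "B"       # tamper with a recorded vote...
print(verify(chain))                   # ...and verification fails
```

Changing any recorded vote invalidates that block's hash and every link after it, which is the property consensus algorithms then replicate across nodes.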
Procedia PDF Downloads 1652729 Dry-Extrusion of Asian Carp, a Sustainable Source of Natural Methionine for Organic Poultry Production
Authors: I. Upadhyaya, K. Arsi, A. M. Donoghue, C. N. Coon, M. Schlumbohm, M. N. Riaz, M. B. Farnell, A. Upadhyay, A. J. Davis, D. J. Donoghue
Abstract:
Methionine, a sulfur-containing amino acid, is essential for healthy poultry production. Synthetic methionine is commonly used as a supplement in conventional poultry. However, for organic poultry, a natural, cost-effective source of methionine that can replace synthetic methionine is unavailable. Invasive Asian carp (AC) are a potential natural methionine source; however, there is no proven technology to utilize this fish methionine. Commercially available rendering is environmentally challenging due to the offensive smell produced during processing. We explored extrusion technology as a potential cost-effective alternative to fish rendering. We also determined the amino acid composition, digestible amino acids, and total metabolizable energy (TMEn) of the extruded AC fish meal. Dry extrusion of AC was carried out by mixing the fish with soybean meal (SBM) in a 1:1 proportion to reduce the high moisture content of the fishmeal, using an Insta Pro Jr. dry extruder, followed by drying and grinding of the product. To determine the digestible amino acids and TMEn of the extruded product, a colony of cecectomized Bovans White roosters was used. Adult roosters (48 weeks of age) were fasted for 30 h and tube-fed 35 grams of one of three treatments: (1) extruded AC fish meal, (2) SBM, and (3) corn. Excreta from each individual bird were collected for the next 48 h. An additional 10 unfed roosters served as endogenous controls. The gross energy and protein content of the feces from the treatments were determined to calculate the TMEn. Fecal samples and treatment feeds were analyzed for amino acid content and percent digestible amino acids. The results suggest that the addition of Asian carp increased the methionine content of SBM from 0.63 to 0.83%. The amino acid digestibility and TMEn values were also greater for the AC meal with SBM than for SBM alone.
Analysis of the dry-extruded AC meal indicates that the product can replace SBM alone and enhance natural methionine in a standard poultry ration. Feed formulations using different concentrations of the AC fish meal point to a potential diet that can supply the methionine required in organic poultry production.Keywords: Asian carp, extrusion, natural methionine, organic poultry
Procedia PDF Downloads 2152728 Life Prediction of Condenser Tubes Applying Fuzzy Logic and Neural Network Algorithms
Authors: A. Majidian
Abstract:
The life prediction of thermal power plant components is necessary to prevent unexpected outages, optimize maintenance tasks during periodic overhauls, and plan inspection tasks and their schedules. One of the most critical components in a power plant is the condenser, because its failure can affect the many components positioned downstream of it. This paper deals with the factors affecting condenser life. The dependency of failure rates on these factors has been investigated using Artificial Neural Network (ANN) and fuzzy logic algorithms. These algorithms have shown their capability as dynamic tools for the life prediction of power plant equipment.Keywords: life prediction, condenser tube, neural network, fuzzy logic
Procedia PDF Downloads 3502727 Forecasting Solid Waste Generation in Turkey
Authors: Yeliz Ekinci, Melis Koyuncu
Abstract:
Successful planning of solid waste management systems requires successful prediction of the amount of solid waste generated in an area. Waste management planning can protect the environment and human health; hence, it is tremendously important for countries. A lack of information on waste generation can cause many environmental and health problems. Turkey plans to join the European Union, so solid waste management is one of the most significant criteria it must address in order to become part of this community. A solid waste management system requires a good forecast of solid waste generation. Thus, this study aims to forecast solid waste generation in Turkey. Artificial Neural Network and Linear Regression models will be used for this purpose. Many models will be run, and the best one will be selected based on predetermined performance measures.Keywords: forecast, solid waste generation, solid waste management, Turkey
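As a minimal illustration of the forecast-and-select workflow (the waste tonnages below are invented, and ordinary least squares stands in for the study's ANN and regression models), one can fit a trend on early years and score the hold-out years with a performance measure such as MAPE:

```python
def fit_linear_trend(ts, ys):
    # Ordinary least squares for y = a + b*t (closed form, stdlib only).
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    return my - b * mt, b

def mape(actual, predicted):
    # Mean absolute percentage error, a common model-selection measure.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

years = [2015, 2016, 2017, 2018, 2019, 2020]
waste_mt = [31.0, 31.6, 32.2, 32.9, 33.5, 34.2]     # invented tonnages (Mt)

a0, b0 = fit_linear_trend(years[:4], waste_mt[:4])  # train on early years
forecast = [a0 + b0 * t for t in years[4:]]         # hold-out forecast
print(round(mape(waste_mt[4:], forecast), 2))       # hold-out error in percent
```

Competing models (linear, ANN, etc.) would each be scored this way on the hold-out years, and the one with the lowest error retained.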
Procedia PDF Downloads 5052726 Cryptographic Protocol for Secure Cloud Storage
Authors: Luvisa Kusuma, Panji Yudha Prakasa
Abstract:
Cloud storage, as a subservice of infrastructure as a service (IaaS) in cloud computing, is a model of networked storage in which data can be stored on remote servers. In this paper, we propose a secure cloud storage system consisting of two main components: the client, a user of the cloud storage service, and the server, which provides the cloud storage service. In this system, we propose protocol schemes that guard against security attacks on data transmission. The protocols are a login protocol, an upload data protocol, a download protocol, and a push data protocol, which implement a hybrid cryptographic mechanism based on encrypting data before they are sent to the cloud, so the cloud storage provider neither knows nor can analyze the user's data, because there is no correspondence between data and user.Keywords: cloud storage, security, cryptographic protocol, artificial intelligence
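The idea that the provider never sees usable data can be sketched with client-side encryption before upload. The sketch below uses a deliberately toy stream cipher built from SHA-256 for illustration only; it is not secure and is not the paper's protocol — a real client would use a vetted scheme such as AES-GCM with RSA or ECDH key exchange:

```python
import hashlib
import secrets

def keystream(key, nonce, length):
    # Toy keystream: SHA-256 in counter mode. Illustration only - NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key, nonce, ciphertext):
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = secrets.token_bytes(32)                 # stays on the client
nonce, ct = encrypt(key, b"confidential user document")
# Only (nonce, ct) is uploaded; the provider never sees `key` or the plaintext.
print(decrypt(key, nonce, ct))
```

Because the key never leaves the client, the server stores only opaque bytes, which is the property the paper's hybrid mechanism is designed to guarantee.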
Procedia PDF Downloads 3552725 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby
Authors: Jazim Sohail, Filipe Teixeira-Dias
Abstract:
Mild traumatic brain injury (mTBI) in non-helmeted contact sports is a growing concern due to the serious risk of injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. These devices do not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps with video footage. The machine learning technique focuses on eliminating the preprocessing step by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster impacts within similar regions from an entire time-series signal. The kinematic signals established from mouthguards are converted to the frequency domain before a clustering algorithm groups similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins.
The same Hybrid III dummy head finite element model is used to create impacts that closely replicate on-field impacts in order to create synthetic time-series datasets consisting of impacts in varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique is a good method for establishing accurate impact locations for signals that have already been labeled as true impacts and extracted from the entire time series. The machine learning technique, by contrast, can be applied to long time-series signal data, although it provides the impact location only within predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by the sensors, saving time for data scientists using instrumented mouthguard kinematic data, since validating true impacts with video footage would not be required.Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI
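One way to picture the rigid-body moment-matching step is the minimum-norm solution of r × F = τ. The sketch below assumes the impact force F and torque τ about the head's center of mass have already been derived from the mouthguard kinematics (e.g., F = ma, τ = Iα), which simplifies the paper's method considerably:

```python
def cross(a, b):
    # Cross product of two 3-vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def impact_point(force, torque):
    # Minimum-norm r satisfying r x F = tau:  r = (F x tau) / |F|^2.
    # Valid because tau = r x F is always perpendicular to F.
    f2 = sum(x * x for x in force)
    return tuple(x / f2 for x in cross(force, torque))

# Synthetic check: an impact at r = (0.1, 0, 0) m with F = (0, 10, 0) N
# produces tau = r x F = (0, 0, 1) N*m; the solver recovers r.
print(impact_point((0.0, 10.0, 0.0), (0.0, 0.0, 1.0)))
```

The component of r along F is unobservable from the torque alone, which is one reason validation against impacts at known locations (lab tests and the finite element model) is needed.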
Procedia PDF Downloads 2162724 Artificial Intelligence Based Meme Generation Technology for Engaging Audience in Social Media
Authors: Andrew Kurochkin, Kostiantyn Bokhan
Abstract:
In this study, a new meme dataset of ~650K meme instances was created, a meme-generation technology based on the state-of-the-art GPT-2 deep learning model was researched, and a comparative analysis of machine-generated and human-created memes was conducted. We show that Amazon Mechanical Turk workers can be used for approximate estimation of users' behavior in a social network, more precisely to measure engagement. Generated memes were found to cause the same engagement as human memes that historically produced low engagement in the social network. Thus, generated memes are less engaging than random memes created by humans.Keywords: content generation, computational social science, memes generation, Reddit, social networks, social media interaction
Procedia PDF Downloads 1372723 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and widespread cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the economic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph.
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complicated network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume records. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
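The evaluation metrics quoted above (MAE, RMSE, directional accuracy) can be reproduced with a few lines; the price series below is invented for illustration:

```python
import math

def mae(actual, predicted):
    # Mean absolute error of the predictions.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error; penalises large errors more than MAE.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def directional_accuracy(prices, predictions):
    # Fraction of days on which the predicted move (vs. yesterday's actual
    # price) has the same sign as the realised move.
    hits = sum(
        1 for i in range(1, len(prices))
        if (predictions[i] - prices[i - 1]) * (prices[i] - prices[i - 1]) > 0
    )
    return hits / (len(prices) - 1)

prices = [100.0, 101.0, 99.5, 102.0]    # invented closing prices
preds = [100.0, 100.6, 100.1, 101.2]    # invented model output
print(round(mae(prices, preds), 3), round(rmse(prices, preds), 3))
print(directional_accuracy(prices, preds))
```

MAE and RMSE grade the size of errors, while directional accuracy grades only their sign, which is why a model can score well on one and poorly on the other.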
Procedia PDF Downloads 632722 Dinitrotoluene and Trinitrotoluene Measuring in Double-Base Solid Propellants
Authors: Z. H. Safari, M. Anbia, G. H. Kouzegari, R. Amirkhani
Abstract:
Toluene and its nitro derivatives are widely used in industry, particularly in various defense applications. Trinitrotoluene is a powerful explosive that serves as the basis against which the equivalent explosive power of similar materials is compared. The aim of this paper is to measure the content of these hazardous substances in fuels of different shelf lives and thereby optimize their storage and maintenance. The methodology involves measuring the amounts of dinitrotoluene (DNT) and trinitrotoluene (TNT) in samples aged at 90 °C by gas chromatography. Results show no significant difference in TNT concentration over the test period, while there was a significant difference in DNT concentration over the same period. The underlying reason is attributed to the simultaneous production of this compound as the stabilizer is destroyed.Keywords: dinitrotoluene, trinitrotoluene, double-base solid propellants, artificial aging
Procedia PDF Downloads 4012721 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions on the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated with a variety of topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), gradually refining the solution space up to the optimal solution.
The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
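A toy version of the discrete-plus-continuous structure of such problems: choose a standard section (discrete) and a plate thickness (continuous) at minimum cost subject to a capacity constraint. The catalog and coefficients below are hypothetical, and brute-force enumeration with a closed-form inner solve stands in for the OA/ER alternation of MILP master problems and NLP subproblems:

```python
# Hypothetical standard sections: name -> (capacity in kN, cost in EUR).
catalog = {
    "IPE200": (180.0, 90.0),
    "IPE240": (240.0, 120.0),
    "IPE300": (330.0, 170.0),
}
k, c, demand = 50.0, 40.0, 260.0  # assumed plate capacity/cost coefficients

best = None
for name, (cap, cost) in catalog.items():
    # Inner "NLP": with the discrete choice fixed, cost is increasing in t,
    # so the optimal continuous thickness is the smallest feasible one.
    t = max(0.0, (demand - cap) / k)
    total = cost + c * t
    if best is None or total < best[2]:
        best = (name, t, total)

print(best)  # cheapest feasible (section, thickness, cost) combination
```

The exhaustive loop is only viable for a toy catalog; OA/ER reaches the same kind of answer by letting the MILP master problem propose promising discrete alternatives instead of enumerating them all.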
Procedia PDF Downloads 452720 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions
Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali
Abstract:
Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows a user with lower-limb amputation to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow sight or joystick operation errors. Thus, this paper presents the design, analysis, and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamic (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles by combining distance information from the ultrasonic sensor with the user-specified direction from the joystick's operation. The proposed fuzzy driving control system focuses on the direction and velocity of the wheelchair. The wheelchair model has been examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented on the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the collision-free behavior of the electric wheelchair.Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor
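A minimal sketch of the fuzzy velocity rule, assuming triangular membership functions and a zero-order Sugeno (weighted-average) defuzzification; the membership ranges and rule outputs are invented, not taken from the paper:

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    # Rule base (assumed for illustration):
    #   distance NEAR -> speed SLOW   (0.2 m/s)
    #   distance MED  -> speed CRUISE (0.6 m/s)
    #   distance FAR  -> speed FAST   (1.0 m/s)
    near = tri(distance_m, -1.0, 0.0, 1.5)
    med = tri(distance_m, 0.5, 1.5, 3.0)
    far = min(1.0, max(0.0, (distance_m - 2.0) / 2.0))  # saturating ramp
    # Zero-order Sugeno defuzzification: weighted average of rule outputs.
    num = near * 0.2 + med * 0.6 + far * 1.0
    den = near + med + far
    return num / den if den else 1.0

for d in (0.0, 1.5, 5.0):
    print(d, round(fuzzy_speed(d), 2))
```

Overlapping memberships are what make the commanded speed vary smoothly with obstacle distance instead of switching abruptly at hard thresholds.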
Procedia PDF Downloads 1272719 Enhancing AI for Global Impact: Conversations on Improvement and Societal Benefits
Authors: C. P. Chukwuka, E. V. Chukwuka, F. Ukwadi
Abstract:
This paper focuses on the advancement and societal impact of artificial intelligence (AI) systems. It explores the need for a theoretical framework in corporate governance, specifically in the context of 'hybrid' companies that have a mix of private and government ownership. The paper emphasizes the potential of AI to address challenges faced by these companies and highlights the importance of the less-explored state model in corporate governance. The aim of this research is to enhance AI systems for global impact and positive societal outcomes. It aims to explore the role of AI in refining corporate governance in hybrid companies and uncover nuanced insights into complex ownership structures. The methodology involves leveraging the capabilities of AI to address the challenges faced by hybrid companies in corporate governance. The researchers will analyze existing theoretical frameworks in corporate governance and integrate AI systems to improve problem-solving and understanding of intricate systems. The paper suggests that improved AI systems have the potential to shape a more informed and responsible corporate landscape. AI can uncover nuanced insights and navigate complex ownership structures in hybrid companies, leading to greater efficacy and positive societal outcomes. The theoretical importance of this research lies in the exploration of the role of AI in corporate governance, particularly in the context of hybrid companies. By integrating AI systems, the paper highlights the potential for improved problem-solving and understanding of intricate systems, contributing to a more informed and responsible corporate landscape. The data for this research will be collected from existing literature on corporate governance, specifically focusing on hybrid companies. Additionally, data on AI capabilities and their application in corporate governance will be collected. 
The collected data will be analyzed through a systematic review of existing theoretical frameworks in corporate governance. The researchers will also analyze the capabilities of AI systems and their potential application in addressing the challenges faced by hybrid companies. The findings will be synthesized and compared to identify patterns and potential improvements. The research concludes that AI systems have the potential to enhance corporate governance in hybrid companies, leading to greater efficacy and positive societal outcomes. By leveraging AI capabilities, nuanced insights can be uncovered, and complex ownership structures can be navigated, shaping a more informed and responsible corporate landscape. The findings highlight the importance of integrating AI in refining problem-solving and understanding intricate systems for global impact.Keywords: advancement, artificial intelligence, challenges, societal impact
Procedia PDF Downloads 542718 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing these as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events of magnitude 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states.
The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, as well as the usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
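The event-to-event (between-event) variability mentioned above can be illustrated with a crude moment-style decomposition of prediction residuals: each event's random effect is approximated by its mean residual, and the remainder is within-event variability. A full mixed-effects model would estimate these jointly, and the residuals below are invented:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical prediction residuals (ln units) tagged by event and station.
records = [
    ("ev1", "stA", 0.30), ("ev1", "stB", 0.10), ("ev1", "stC", 0.20),
    ("ev2", "stA", -0.40), ("ev2", "stB", -0.20),
]

def event_terms(records):
    # Crude estimate: the between-event term of each event is its mean
    # residual across the stations that recorded it.
    by_event = defaultdict(list)
    for ev, _, r in records:
        by_event[ev].append(r)
    return {ev: mean(rs) for ev, rs in by_event.items()}

terms = event_terms(records)
within = [r - terms[ev] for ev, _, r in records]  # within-event remainder
print(terms)
print(within)
```

Separating these terms matters because between-event and within-event variability have different physical origins (source effects vs. path and site effects) and therefore different standard deviations in the aleatory model.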
Procedia PDF Downloads 1212717 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding rapidly. However, the protection of MVDC systems against DC faults is a challenge that can have consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. There is therefore a lack of MVDC CB standards, including thresholds for acceptable power losses and operation speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature gives only the timing sequence of each switch, with the emphasis on topology and without an in-depth study of the DCCB control algorithm, as circuit breaker control systems are not yet systematic. A digital testing benchmark is designed for proof-of-concept simulation studies using software models. It can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup utilizes data acquisition from accurate sensors installed on the tested MVDC CB through the general-purpose inputs/outputs (GPIO) of a microcontroller and a PC. Prototype studies on laboratory-based models are achieved utilizing Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc reignition, helping the coordination of DCCBs in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark
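A minimal lumped-parameter sketch of why breaker operation speed matters for such a benchmark: an explicit-Euler simulation of a pole-to-pole fault in an RL circuit, where the breaker is idealized as inserting a large resistance at its opening time (all parameter values are invented for illustration):

```python
def fault_current_peak(v=10e3, r=0.5, l=5e-3, t_open=2e-3,
                       r_breaker=50.0, dt=1e-6, t_end=10e-3):
    # Explicit-Euler solution of L*di/dt = v - R_total*i for a pole-to-pole
    # fault; the breaker is idealised as inserting r_breaker ohms at t_open.
    i = peak = t = 0.0
    while t < t_end:
        r_total = r + (r_breaker if t >= t_open else 0.0)
        i += dt * (v - r_total * i) / l
        peak = max(peak, i)
        t += dt
    return peak

slow = fault_current_peak(t_open=4e-3)   # slower breaker
fast = fault_current_peak(t_open=1e-3)   # faster breaker
print(fast < slow)                       # faster interruption limits the peak
```

Even this toy model shows the coupling a benchmark must capture: with a time constant L/R of 10 ms, every millisecond of breaker delay lets the fault current climb substantially toward its prospective value.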
Procedia PDF Downloads 78