Search results for: expanded invasive weed optimization algorithm (exIWO)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7292


572 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array

Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Drones were first developed for military use during World War I, but recently they have been used in a variety of fields. Several companies actively utilize drone technology to strengthen their services; in agriculture, drones are used for crop monitoring and sowing, and others use drones for hobby activities such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations, and terrorism, are also increasing. As the need for monitoring and tracking of drones increases, research is progressing accordingly. A drone detection system estimates the position of a drone using the physical phenomena that occur when it flies. The drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection. Among the various drone detection systems, the acoustic detection system is advantageous in that the microphone array is smaller, less expensive, and easier to operate than the other systems. In this paper, the acoustic signal emitted by a flying drone is acquired using a minimal number of microphones, and the direction of the drone is estimated. When estimating the direction of arrival (DOA), there are methods that calculate the DOA based on the time difference of arrival (TDOA) and methods that calculate it based on beamforming. The TDOA technique requires fewer microphones than the beamforming technique but is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust in noisy environments and can simultaneously estimate the DOAs of several drones. When estimating the DOA using acoustic signals emitted from a drone, the position of the drone cannot be measured; only the direction can be estimated.
To overcome this problem, we show how to estimate the position of drones by arranging multiple microphone arrays. Four tetrahedral microphone arrays were used in the experiments. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments.
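The two-microphone TDOA geometry described above can be sketched in a few lines. This is an illustrative example, not the paper's implementation: the sampling rate, microphone spacing, and speed of sound below are assumed values, and the peak of the cross-correlation between the two channels gives the arrival-time difference.

```python
import numpy as np

def estimate_doa_tdoa(sig_a, sig_b, fs, mic_distance, c=343.0):
    """Estimate the direction of arrival (degrees from broadside) for a
    two-microphone pair from the time difference of arrival (TDOA)."""
    # Cross-correlate the channels; the lag of the peak is the TDOA in samples.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tau = lag / fs
    # Far-field geometry: tau = d * sin(theta) / c
    s = np.clip(c * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```

With a single pair this recovers only an angle, which is exactly the limitation the abstract notes; triangulating a position requires multiple arrays.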

Keywords: acoustic sensing, direction of arrival, drone detection, microphone array

Procedia PDF Downloads 160
571 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles

Authors: Dipak Sen, Rajdeep Ghosh

Abstract:

This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-temperature-surfaced circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with the standard wall function, using the finite volume method and the SIMPLE algorithm. The study covers Reynolds numbers (Re) from 80,000 to 120,000, a Prandtl number (Pr) of 0.73, and pitch ratios (PR) of 1, 2, 3, 4, and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length, and the test section. Ansys Fluent 15.0 software was used to solve the flow field. The study reveals that a circular pipe with baffles has a higher Nusselt number and friction factor than a smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR = 5 and PR = 1, respectively. The Nusselt number increases with pitch ratio over the range of study, whereas the friction factor decreases up to PR = 3, after which it remains almost constant up to PR = 5. The thermal enhancement factor increases with increasing pitch ratio, decreases slightly with Reynolds number over the range of study, and becomes almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffle is about 1.23, obtained for pitch ratio 5 at a Reynolds number of 120,000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both fluid flow and heat transfer characteristics.
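The thermal enhancement factor quoted above (about 1.23 at pitch ratio 5) is conventionally defined at equal pumping power as the Nusselt-number gain over the smooth pipe divided by the cube root of the friction-factor penalty. A minimal sketch under that assumed definition; the input numbers below are illustrative, not values from the study.

```python
def thermal_enhancement_factor(nu, f, nu_smooth, f_smooth):
    """Thermal enhancement factor at equal pumping power:
    eta = (Nu / Nu_s) / (f / f_s) ** (1/3)."""
    return (nu / nu_smooth) / (f / f_smooth) ** (1.0 / 3.0)
```

A value above 1 means the heat-transfer gain outweighs the pressure-drop cost of the baffles.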

Keywords: friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio

Procedia PDF Downloads 372
570 Numerical Study of Piled Raft Foundation Under Vertical Static and Seismic Loads

Authors: Hamid Oumer Seid

Abstract:

Piled raft foundation (PRF) is a union of pile and raft working together, through soil-pile, pile-raft, soil-raft, and pile-pile interactions, to provide adequate bearing capacity and controlled settlement. Uniform pile positioning is commonly used in PRF; however, there is wide room for optimization through a parametric study under vertical load, leading to a safer and more economical foundation. Addis Ababa is located in seismic zone 3, with a peak ground acceleration (PGA) above the threshold of damage, which makes it vital to investigate the performance of PRF under seismic load considering dynamic kinematic soil-structure interaction (SSI). The study area is located in Addis Ababa around Mexico (Commercial Bank) and Kirkos (Nib, Zemen, and United Bank), from which the input parameters (pile length, pile diameter, pile spacing, raft area, raft thickness, and load) were taken. Finite-difference-based numerical software, FLAC3D V6, was used for the analysis. The Kobe (1995) and Northridge (1994) earthquakes were selected, and deconvolution analysis was performed. Close load sharing between pile and raft was achieved at a spacing of 7D with different pile lengths and diameters. The maximum settlement reduction achieved was 9% for a pile of 2 m diameter when increasing the length from 10 m to 20 m, which shows that pile length is not effective in reducing settlement. The installation of piles results in an increase in the negative bending moment of the raft compared with an unpiled raft. Hence, the optimized design depends on pile spacing and raft edge length, while pile length and diameter are not significant parameters. An optimized piled raft configuration (A_G/A_R = 0.25 at the center, with piles provided around the edge) reduced the pile number by 40% and the differential settlement by 95%. The dynamic analysis shows that the acceleration at the top of the piled raft has a PGA of 0.25 m/s² and 0.63 m/s² for the Northridge (1994) and Kobe (1995) earthquakes, respectively, due to attenuation of seismic waves.
Pile head displacement (maximum 2 mm, under the allowable limit) is affected by the PGA rather than by the duration of an earthquake. End-bearing and friction PRF performed similarly under the two earthquakes except for their vertical settlement considering SSI. Hence, PRF has shown adequate resistance to seismic loads.

Keywords: FLAC3D V6, earthquake, optimized piled raft foundation, pile head displacement

Procedia PDF Downloads 29
569 Teaching a Senior Design Course in Industrial Engineering

Authors: Mehmet Savsar

Abstract:

Industrial Engineering is one of the engineering disciplines that deal with the analysis, design, and improvement of systems, including manufacturing, supply chain, healthcare, communication, and general service systems. Industrial engineers carry out a comprehensive study of a given system, analyze its interacting units, determine problem areas, apply various optimization and operations research tools, and recommend solutions resulting in significant improvements. The Senior Design course in Industrial Engineering is the culmination of the Industrial Engineering curriculum in a capstone design course, which fundamentally deals with systems analysis and design. The course at Kuwait University has been carefully designed with various course objectives and outcomes in mind, to achieve several program outcomes through practices and learning experiences explicitly gained in systems analysis and design. The Senior Design course is carried out in a selected industrial or service organization, with support from its engineering personnel, over a full semester, by a team of students who are usually in the last semester of their academic programs. A senior faculty member administers the course throughout to ensure that the students accomplish the prescribed objectives. Students work in groups to formulate issues, propose solutions, and communicate results in formal written and oral presentations. When the course is completed, they emerge as engineers who are clearly more mature, communicate better, participate in teamwork, see a systems perspective in analysis and design, and, more importantly, can assume responsibility at entry level as engineers. These accomplishments are mainly due to the real-life experience gained during their design study. This paper presents methods, procedures, and experiences in teaching a Senior Design course in the Industrial Engineering curriculum.
A detailed description of the course, its role, objectives, outcomes, learning practices, and assessments is given in relation to other courses in the Industrial Engineering curriculum. The administration of the course, the selected organizations where the course project is carried out, the problems and solution tools utilized, and the student accomplishments and obstacles faced are presented. The issues discussed in this paper could help instructors teach the course and clarify the contribution of a design course to industrial engineering education in general. In addition, the methods and teaching procedures presented could facilitate future improvements in the industrial engineering curriculum.

Keywords: senior design course, industrial engineering, capstone design, education

Procedia PDF Downloads 135
568 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as the healthy subject) and sudden cardiac death (SCD subject) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified, with an accuracy of 95%, that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
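The time-domain features and the k-NN classifier described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the RR series and the training points below are synthetic, and the classifier is a bare Euclidean-distance k-NN.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Time-domain HRV features from an RR-interval series (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                   # mean R-to-R interval
        "sdnn": rr.std(ddof=1),                 # SD of normal-to-normal intervals
        "rmssd": np.sqrt(np.mean(diffs ** 2)),  # RMS of successive differences
    }

def knn_predict(train_x, train_y, x, k=3):
    """Minimal k-nearest-neighbour classifier (Euclidean distance, majority vote)."""
    d = np.linalg.norm(train_x - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

In a real pipeline each subject's feature vector (SDNN, RMSSD, mean RR, ...) would be classified against labeled healthy and SCD recordings.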

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 342
567 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization

Authors: Michael Howard

Abstract:

The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. In 2017, 115,370 gasoline stations were operating in the United States, far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240-volt, Level 2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. Its architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine allows fine-grained user query conditions. However, the difficulty of scaling across multiple nodes and the cost of server-based compute have driven a trend over the last 20 years towards NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against its server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria.
Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and if the introduction of a GraphQL middleware layer can overcome its deficiencies.
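One way a GraphQL middleware layer can offset a document store's lack of joins is request batching, the DataLoader pattern. A minimal sketch of that pattern, assuming a `batch_fetch` callback that stands in for a bulk-read API such as Firestore's; the names here are illustrative, not the study's code.

```python
class BatchLoader:
    """DataLoader-style batching, a common GraphQL middleware pattern:
    coalesce the per-field document reads issued while resolving a query
    into a single batched fetch, mitigating the N+1 read pattern a
    document store would otherwise suffer."""

    def __init__(self, batch_fetch):
        self._batch_fetch = batch_fetch  # fn(list_of_keys) -> {key: doc}
        self._queue = []

    def load(self, key):
        # Resolvers enqueue keys instead of fetching immediately.
        self._queue.append(key)

    def dispatch(self):
        # One round trip for all queued keys, de-duplicated, order kept.
        keys = list(dict.fromkeys(self._queue))
        self._queue.clear()
        return self._batch_fetch(keys)
```

Each resolver calls `load`, and a single `dispatch` at the end of the resolution pass issues one backend read instead of one per field.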

Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL

Procedia PDF Downloads 63
566 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study of China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges to suitable human settlements and provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society, and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considered the coupling effect among indicators was used to determine the weights of the indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) Human settlement suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem in China exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016. Among subsystems, it decreased for the residence subsystem, whereas it increased for the economy subsystem. (4) The suitability of the natural subsystem has become a limiting factor for the improvement of human settlement suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can help support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society, and residence context.
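Once indicator weights have been obtained (in the study, from the extended Fourier amplitude sensitivity test), the composite index is a weighted sum of normalised indicators. A minimal sketch of that final aggregation step only, with illustrative data; the sensitivity analysis itself is not reproduced here.

```python
import numpy as np

def suitability_index(indicators, weights):
    """Composite suitability score per region: min-max normalise each
    indicator column, then combine with the given (e.g. sensitivity-derived)
    weights, normalised to sum to one."""
    x = np.asarray(indicators, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    norm = (x - lo) / np.where(hi > lo, hi - lo, 1.0)  # guard constant columns
    w = np.asarray(weights, dtype=float)
    return norm @ (w / w.sum())
```

Rows are regions (e.g. provinces), columns are indicators; the output is one score per region on a 0-to-1 scale.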

Keywords: human settlements, suitability evaluation, extended Fourier amplitude, human settlement suitability

Procedia PDF Downloads 82
565 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios.
By incorporating the Guided Image Reconstruction Module, dual-branch structure, and Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
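One common way to resample a high-resolution image into a set of low-resolution images without discarding pixels is strided sub-sampling (pixel unshuffle). Whether this matches the paper's Guided Image Reconstruction Module is an assumption on our part; the sketch below only illustrates the general idea.

```python
import numpy as np

def pixel_unshuffle(img, r):
    """Resample one H x W image into r*r images of size (H//r) x (W//r)
    by strided sub-sampling: each output has lower resolution, but no
    pixel of the original is discarded."""
    h, w = img.shape
    assert h % r == 0 and w % r == 0, "image size must be divisible by r"
    return [img[i::r, j::r] for i in range(r) for j in range(r)]
```

A network can then process the low-resolution set cheaply while the full information content of the input remains available.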

Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition

Procedia PDF Downloads 25
564 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations

Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang

Abstract:

Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
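The label-propagation step that turns binary infected states into a continuous density over the graph can be sketched as a simple diffusion anchored to the observations. This is an illustrative reading of that step, not the ODESI implementation; the adjacency matrix and parameters below are made up.

```python
import numpy as np

def propagate_labels(adj, infected, steps=10, alpha=0.5):
    """Diffuse binary infection states over a graph to obtain a continuous
    'infection density' per node, as in label-propagation preprocessing
    for source identification."""
    a = np.asarray(adj, dtype=float)
    deg = a.sum(axis=1, keepdims=True)
    p = a / np.where(deg > 0, deg, 1.0)        # row-normalised transition matrix
    y = np.asarray(infected, dtype=float)      # observed states, kept as anchor
    f = y.copy()
    for _ in range(steps):
        f = alpha * (p @ f) + (1 - alpha) * y  # spread, pulled back to observations
    return f
```

Nodes closer to the infected region end up with higher density, which can then seed the initial state vectors of the downstream GNN-ODE model.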

Keywords: source identification, ordinary differential equations, label propagation, complex networks

Procedia PDF Downloads 22
563 Application of Artificial Intelligence in Market and Sales Network Management: Opportunities, Benefits, and Challenges

Authors: Mohamad Mahdi Namdari

Abstract:

In today's rapidly changing and evolving business competition, companies and organizations require advanced and efficient tools to manage their markets and sales networks. Big data analysis, quick response in competitive markets, process and operations optimization, and forecasting customer behavior are among the concerns of executive managers. Artificial intelligence, as one of the emerging technologies, has provided extensive capabilities in this regard. The use of artificial intelligence in market and sales network management can lead to improved efficiency, increased decision-making accuracy, and enhanced customer satisfaction. Specifically, AI algorithms can analyze vast amounts of data, identify complex patterns, and offer strategic suggestions to improve sales performance. However, many companies are still far from leveraging this technology effectively, and those that do face challenges in fully exploiting AI's potential in market and sales network management. It appears that a lack of knowledge of this technology among the general public, and even among the managerial and academic communities, has caused managerial structures to lag behind the progress of artificial intelligence. Additionally, high costs, fear of change and employee resistance, a lack of quality data production processes, the need to update structures and processes, implementation issues, the need for specialized skills and technical equipment, and ethical and privacy concerns are among the factors preventing the widespread use of this technology in organizations. Clarifying and explaining this technology, especially to the academic, managerial, and elite communities, can pave the way for a transformative beginning. The aim of this research is to elucidate the capacities of artificial intelligence in market and sales network management, identify its opportunities and benefits, and examine the existing challenges and obstacles.
This research aims to leverage AI capabilities to provide a framework for enhancing market and sales network performance for managers. The results of this research can help managers and decision-makers adopt more effective strategies for business growth and development by better understanding the capabilities and limitations of artificial intelligence.

Keywords: artificial intelligence, market management, sales network, big data analysis, decision-making, digital marketing

Procedia PDF Downloads 45
562 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures, which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long time response while reducing the computational cost considerably. The proposed computational framework is a combination of the mean-field space homogenization based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro and macro-chronological time scales, and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time-scale) problems. The former is purely VE, and solved once for each macro time step, whereas the latter problem is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both micro and macro-chronological problems to determine the micro and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic predictor and plastic corrections. The proposal is implemented for an extended Mori-Tanaka scheme, and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.
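The return-mapping algorithm mentioned for the matrix response, with its predictor and corrector steps, can be illustrated in its simplest one-dimensional, rate-independent form. The paper's viscoelastic-viscoplastic version is considerably richer; the modulus and yield stress below are illustrative values.

```python
def return_map_1d(eps, eps_p, E, sigma_y):
    """One step of the elastic-predictor / plastic-corrector (return
    mapping) scheme for 1D rate-independent perfect plasticity."""
    sig_trial = E * (eps - eps_p)          # elastic predictor
    f = abs(sig_trial) - sigma_y           # yield function
    if f <= 0.0:
        return sig_trial, eps_p            # elastic step: trial state admissible
    sign = 1.0 if sig_trial > 0.0 else -1.0
    dgamma = f / E                         # plastic multiplier enforcing f = 0
    sig = sig_trial - E * dgamma * sign    # project stress back to yield surface
    return sig, eps_p + dgamma * sign      # updated plastic strain
```

The J2 version works the same way with the deviatoric stress tensor in place of the scalar stress, and the viscoelastic predictor replaces the purely elastic one.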

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 371
561 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing

Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima

Abstract:

Reverse osmosis (RO) membranes have been widely used in desalination to purify water for drinking and other purposes. Although most RO membranes at present have no resistance to chlorine, chlorine-resistant membranes are being developed. Direct chlorine treatment or chlorine washing will therefore be an option for preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation can be controlled by chlorine washing, expensive pretreatment for particle removal can be eliminated or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required for controlling biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with an RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high-fouling-potential waters, the fluorescence intensity decreased from 470 to 0 under the following washing conditions: 10 mg/L chlorine concentration, a washing interval of 2 times/d, and a washing time of 30 min. The chlorine concentration required to control biofilm formation decreased as the chlorine concentration (0.5–10 mg/L), the washing interval (1–4 times/d), or the washing time (1–30 min) increased. For the sample solutions used in the study, a 10 mg/L chlorine concentration with a 2 times/d interval and a 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved applicable to controlling biofilm formation in continuous flow experiments as well.
Moreover, chlorine washing employed in controlling biofilm with suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than that for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was 79% controlled by continuous washing with 10 mg/L of free chlorine concentration, and the inorganic accumulation amount decreased by 58% to levels similar to that of pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that the inhibition of biofilm growth can almost completely reduce further particle accumulation. In addition, effective hypochlorite washing condition which can control both biofilm formation and particle accumulation could be achieved.

Keywords: reverse osmosis, washing condition optimization, hypochlorous acid, biofouling control

Procedia PDF Downloads 352
560 Toward Indoor and Outdoor Surveillance using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach, focusing mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination changes in variance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using contrast-limited adaptive histogram equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian mixture model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation to remove possible noise. For the experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
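The per-pixel background model can be illustrated with a single Gaussian per pixel, a simplified stand-in for the K = 5 GMM used in the text. The learning rate, initial variance, and threshold below are assumed values, not the paper's settings.

```python
import numpy as np

class RunningGaussianBG:
    """Single-Gaussian-per-pixel background model: a pixel is foreground
    when it deviates from the running mean by more than k standard
    deviations; the model adapts only where the background matched."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var        # Mahalanobis-style test
        bg = ~fg
        # Update mean and variance only at background pixels.
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

A full GMM keeps K such Gaussians per pixel with mixing weights, which is what lets the model absorb multi-modal backgrounds such as swaying trees.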

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 258
559 Digital Transformation and Digitalization of Public Administration

Authors: Govind Kumar

Abstract:

The concept of ‘e-governance’ that was brought about by the new wave of reforms, namely ‘LPG’ in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation is leading the governments with qualitative decisions, optimization in rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based applications interface. ICT-based applications/technologies have enormous potential for impacting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between the clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving 5 Es (effective, efficient, easy, empower, and equity) of e-governance and six Rs (reduce, reuse, recycle, recover, redesign and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace the traditional administration with the digitalization of public administration. On the other hand, it has brought in a new set of challenges, like the digital divide, e-illiteracy, technological divide, etc., and problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, phishing, etc. before the governments. Therefore, it would be essential to bring in a rightful mixture of technological and humanistic interventions for addressing the above issues. 
This is because technology lacks an emotional quotient, and administration does not work like technology; both fall short unless a blend of technology and a humane face is brought into the administration. The paper empirically analyzes the significance of the technological framework of digital transformation within the government setup for the digitalization of public administration, on the basis of a synthesis of two case studies drawn from two diverse fields of administration, and presents a future framework for the study.

Keywords: digital transformation, electronic governance, public administration, knowledge framework

Procedia PDF Downloads 101
558 Optimizing Electric Vehicle Charging Networks with Dynamic Pricing and Demand Elasticity

Authors: Chiao-Yi Chen, Dung-Ying Lin

Abstract:

With growing awareness of environmental protection and the implementation of government carbon-reduction policies, the number of electric vehicles (EVs) has rapidly increased, leading to a surge in charging demand and imposing significant challenges on the existing power grid’s capacity. Traditional urban power grid planning has not adequately accounted for the additional load generated by EV charging, which often strains the infrastructure. This study aims to optimize grid operation and load management by dynamically adjusting EV charging prices based on real-time electricity supply and demand, leveraging consumer demand elasticity to enhance system efficiency. It addresses the intricate interplay between urban traffic patterns and power grid dynamics in the context of EV adoption. By integrating Hsinchu City's road network with the IEEE 33-bus system, the research creates a comprehensive model that captures both the spatial and temporal aspects of EV charging demand. This approach allows for a nuanced analysis of how traffic flow directly influences the load distribution across the power grid. The strategic placement of charging stations at key nodes within the IEEE 33-bus system, informed by actual road traffic data, enables a realistic simulation of the dynamic relationship between vehicle movement and energy consumption. This integration of transportation and energy systems provides a holistic view of the challenges and opportunities in urban EV infrastructure planning, highlighting the critical need for solutions that can adapt to the ever-changing interplay between traffic patterns and grid capacity. The proposed dynamic pricing strategy effectively reduces peak charging loads, enhances the operational efficiency of charging stations, and maximizes operator profits, all while ensuring grid stability. 
These findings provide practical insights and a valuable framework for optimizing EV charging infrastructure and policies in future smart cities, contributing to more resilient and sustainable urban energy systems.
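The abstract does not give the pricing rule itself, but the core mechanism (raising the charging price when grid load runs above a target and letting constant-elasticity demand respond) can be sketched as follows. All function names, the linear price-adjustment rule, and the parameter values are illustrative assumptions, not the authors' model:

```python
def adjust_price(base_price, current_load, target_load, sensitivity=0.5):
    # Illustrative linear rule: raise the price when load exceeds the
    # target, lower it otherwise (not the paper's actual pricing scheme).
    return base_price * (1 + sensitivity * (current_load - target_load) / target_load)

def demand_response(base_demand, base_price, new_price, elasticity=-0.3):
    # Constant-elasticity demand: demand scales with (p_new/p_old)**elasticity,
    # so a negative elasticity means higher prices suppress charging demand.
    return base_demand * (new_price / base_price) ** elasticity

p0, d0 = 5.0, 100.0  # made-up baseline price and charging demand
p_peak = adjust_price(p0, current_load=120, target_load=100)  # peak-hour load
d_peak = demand_response(d0, p0, p_peak)                      # demand after repricing
```

Under these toy numbers the peak-hour price rises and the elastic demand falls, shifting load away from the peak, which is the qualitative behaviour the study exploits.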

Keywords: dynamic pricing, demand elasticity, EV charging, grid load balancing, optimization

Procedia PDF Downloads 23
557 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics

Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman

Abstract:

Urinary tract infections (UTIs) are considered the most common bacterial infections worldwide. They are caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, most bacteria have unfortunately already developed resistance to the majority of commonly available antibiotics. It is therefore crucial to identify the infecting bacterium and determine its susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time-consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required both to identify the infecting bacterium at the species level and to diagnose its susceptibility to antibiotics. Fourier-transform infrared (FTIR) spectroscopy is well known as a sensitive and rapid method that can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacterium at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. To this end, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
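The classification step maps a measured spectrum to a species label. As a minimal illustration only, a nearest-centroid classifier on made-up three-band "spectra" is sketched below; it is a stand-in for the Random Forest / XGBoost / CNN models actually used, and every name and number is invented:

```python
def centroid(spectra):
    # Mean spectrum of a list of equal-length spectra.
    n = len(spectra[0])
    return [sum(s[i] for s in spectra) / len(spectra) for i in range(n)]

def classify(spectrum, centroids):
    # Assign the label whose centroid is closest in squared Euclidean distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(spectrum, centroids[label]))

# Toy training "spectra" (three absorbance bands, entirely fabricated).
train = {
    "E. coli":       [[1.0, 0.2, 0.1], [0.9, 0.3, 0.1]],
    "K. pneumoniae": [[0.1, 1.0, 0.4], [0.2, 0.9, 0.5]],
}
centroids = {label: centroid(specs) for label, specs in train.items()}
label = classify([0.95, 0.25, 0.1], centroids)
```

Real FTIR spectra have hundreds of wavenumber channels, which is why the study relies on more expressive models than this sketch.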

Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning

Procedia PDF Downloads 170
556 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction

Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand

Abstract:

Food is the source of energy and growth through the breakdown of its vital components and plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play special roles in biological systems, and the origin of many such compounds, directly or indirectly, is algae. Algae are an enormous source of polysaccharides and have gained much interest for human well-being. In this study, algae biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids, antioxidant activity, and polyphenolic contents. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics and on the extracts’ antioxidant activity. We employ Response Surface Methodology (RSM) for process optimization, and the influence of the independent parameters on each response is determined through Analysis of Variance (ANOVA). Our results show that ultrasound-assisted extraction for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae. 
This study is among the first attempts to optimize ultrasound-assisted extraction with natural deep eutectic solvents and to investigate their application in the extraction of bioactive compounds from algae. We also discuss the future perspective of ultrasound technology, which helps in understanding the complex mechanism of ultrasound-assisted extraction and further guides its application to algae.
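RSM fits a low-order polynomial response surface to the measured responses and locates its stationary point. As a toy one-factor illustration (the yield numbers below are invented, not the study's data), a quadratic fitted exactly through three evenly spaced time levels gives the optimum extraction time in closed form:

```python
def quadratic_optimum(times, yields):
    # Fit y = a*t^2 + b*t + c exactly through three evenly spaced points
    # (second difference gives a; the end-to-end slope gives b),
    # then return the stationary point t* = -b / (2a).
    t1, t2, t3 = times
    y1, y2, y3 = yields
    h = t2 - t1
    a = (y1 - 2 * y2 + y3) / (2 * h * h)
    b = (y3 - y1) / (t3 - t1) - a * (t3 + t1)
    return -b / (2 * a)

# Hypothetical extraction times (min) and phenolic yields (mgGAE/gdw).
t_opt = quadratic_optimum([30, 50, 70], [40.0, 50.0, 44.0])
```

With several factors, RSM fits the full quadratic model with interaction terms by least squares and ANOVA tests each term, but the stationary-point idea is the same.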

Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids

Procedia PDF Downloads 182
555 Circadian Rhythmic Expression of Choroid Plexus Membrane Transport Proteins

Authors: Rafael Mineiro, André Furtado, Isabel Gonçalves, Cecília Santos, Telma Quintela

Abstract:

The choroid plexus (CP) epithelial cells form the blood-cerebrospinal fluid barrier. This barrier is highly important for brain protection: it physically separates the blood from the cerebrospinal fluid and controls the trafficking of molecules, including therapeutic drugs, from the blood to the brain. This control is achieved by tight junctions between epithelial cells and by membrane receptors and transport proteins from the solute carrier and ATP-binding cassette superfamilies on the CP epithelial cell membrane. Previous research by our group showed a functional molecular clock in the CP. The key findings included rhythmic expression of Bmal1, Per2, and Cry2 in female rat CP, and rhythmic expression of Cry2 and Per2 in male rat CP. Furthermore, in cultured rat CP epithelial cells, we have already shown that 17β-estradiol upregulates the expression of Bmal1 and Per1, and that the Per1 and Per2 upregulation was abrogated in the presence of the estrogen receptor antagonist ICI. These findings, together with the fact that the CP produces robust rhythms, prompted us to investigate the impact of sex hormones and circadian rhythms on the expression of CP drug transporters, a step towards the development and optimization of therapeutic strategies for efficiently delivering drugs to the brain. To this end, we analyzed the circadian rhythmicity of the Abcb1, Abcc2, Abcc4, Abcg2, and Oat3 drug transporters in the CP of male and female rats. Gene expression of these transporters was assessed at 4 time points by RT-qPCR, and the presence of rhythms was evaluated with the CircWave software. Our findings showed rhythmic expression of Abcc1 in the CP of male rats, of Abcg2 in female rats, and of Abcc4 and Oat3 in both male and female rats, with an almost antiphasic pattern between male and female rats for Abcc4. 
In conclusion, from a functional point of view, these findings may account for daily variations in brain permeability to several therapeutic drugs, making them important data for the future establishment and development of therapeutic strategies tailored to the time of day.
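CircWave-style rhythm detection essentially fits a sinusoid with a fixed 24-hour period to the sampled expression levels. A minimal cosinor fit at four evenly spaced time points can be sketched as below; the expression values are synthetic, not the study's qPCR data:

```python
import math

def cosinor(times, values, period=24.0):
    # Least-squares fit of y = M + A*cos(w*t - phi) at evenly spaced
    # time points; returns (mesor, amplitude, peak_time).
    w = 2 * math.pi / period
    n = len(values)
    mesor = sum(values) / n
    ac = 2 / n * sum(y * math.cos(w * t) for t, y in zip(times, values))
    as_ = 2 / n * sum(y * math.sin(w * t) for t, y in zip(times, values))
    amplitude = math.hypot(ac, as_)
    peak_time = (math.atan2(as_, ac) / w) % period  # acrophase in hours
    return mesor, amplitude, peak_time

# Synthetic expression sampled at 0, 6, 12, 18 h (peak around 6 h).
mesor, amp, peak = cosinor([0, 6, 12, 18], [10.0, 13.0, 10.0, 7.0])
```

For evenly spaced samples spanning a full period, this discrete Fourier projection recovers a single 24-hour harmonic exactly; significance testing of the amplitude is what tools like CircWave add on top.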

Keywords: choroid plexus, circadian rhythm, membrane transporters, sex hormones

Procedia PDF Downloads 16
554 Design of the Ice Rink of the Future

Authors: Carine Muster, Prina Howald Erika

Abstract:

Today's ice rinks are major energy consumers for the production and maintenance of ice. At the same time, users demand that the other rooms be tempered or heated. The building complex must therefore provide both cooled and heated zones, which makes carbon-zero ice rinks difficult to achieve. The study analyzes how the civil engineering sector can significantly contribute to minimizing greenhouse gas emissions and optimizing synergies across an entire ice rink complex. The analysis focused on three distinct aspects: the layout, including the volumetric arrangement of the premises present in an ice rink; the materials, chosen for the most ecological structural approach possible; and the construction methods, based on innovative solutions to reduce the carbon footprint. The first aspect shows that the organization of the interior volumes and the definition of the shape of the rink play a significant role. The layout makes the use and operation of the premises as efficient as possible, thanks to the differentiation between heated and cooled volumes, while minimizing heat loss between the different rooms. The sprayed concrete method, which is still little known, proves that it is possible to achieve the strength of traditional concrete for the structural elements of the load-bearing and non-load-bearing walls of the ice rink by using materials excavated from the construction site, providing a more ecological and sustainable solution. The installation of a ventilated crawl space underneath the ice floor, making it independent of the rest of the structure, provides a natural insulating layer, preventing the transfer of cold to the rest of the structure and reducing energy losses. The addition of active pipes as part of the foundation of the ice floor, coupled with a suitable system, provides warmth in winter and storage in summer, all made possible by the natural heat of the ground. 
In conclusion, this study provides construction recommendations for future ice rinks with a significantly reduced energy demand, using some simple preliminary design concepts. By optimizing the layout, materials, and construction methods of ice rinks, the civil engineering sector can play a key role in reducing greenhouse gas emissions and promoting sustainability.

Keywords: climate change, energy optimization, green building, sustainability

Procedia PDF Downloads 68
553 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering

Authors: Emiel Caron

Abstract:

Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references can be used to connect patent databases with bibliographic databases, e.g. to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references: they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in a single attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Thanks to the scoring, different rules can be combined to join scientific references, i.e. the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After scoring, pairs of scientific references above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but careful, i.e. it weights precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g. in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
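The pipeline described above (rule-based pair scoring followed by single-linkage grouping into connected components) can be sketched as follows. The specific rules, score values, and threshold here are illustrative assumptions, not those of the actual method:

```python
def score_pair(a, b):
    # Toy rule-based score: each agreeing metadata field adds evidence,
    # so independent rules reinforce each other (made-up weights).
    s = 0
    if a.get("year") and a.get("year") == b.get("year"):
        s += 30
    if a.get("journal") and a.get("journal") == b.get("journal"):
        s += 30
    if a.get("first_author") and a.get("first_author") == b.get("first_author"):
        s += 40
    return s

def cluster(records, threshold=60):
    # Single linkage via union-find: union any pair whose score clears the
    # threshold; the resulting roots are the connected components.
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if score_pair(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(records))]

refs = [
    {"year": 1998, "journal": "Nature", "first_author": "Smith"},
    {"year": 1998, "journal": "Nature", "first_author": "Smith"},
    {"year": 2005, "journal": "Science", "first_author": "Lee"},
]
labels = cluster(refs)  # first two references merge, the third stays alone
```

The all-pairs loop is quadratic; at Patstat scale the real method would rely on blocking so that only candidate pairs are scored.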

Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics

Procedia PDF Downloads 194
552 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper

Authors: Nicolas Bae, Theodore L. Karavasilis

Abstract:

Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes in order to minimize direct as well as indirect damage to society. The loss of the essential functions housed in tall buildings in metropolitan regions can be severely hazardous in socio-economic terms, which further raises the requirement for advanced seismic performance. To meet these progressively stricter requirements, seismic reinforcement of some old, conventional buildings has become enormously costly, while the methods for increasing buildings' resilience against wind or earthquake loads have become more advanced. Up to now, vibration control devices such as passive damper systems are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, a hybrid damper system, so that it can maximize its energy dissipation capacity in tall buildings against wind and earthquake loading, and 2) the verification of the seismic performance of the VPBD system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is the introduction of an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of the VPBD system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be governed by wind load in their normal state and by earthquake load after yielding of the steel plates. The prototype tall buildings are modeled in the OpenSees software. 
Three model types were used to verify the performance of the damper (a bare MRF, an MRF with visco-elastic dampers, and an MRF with visco-plastic dampers); a set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous and visco-elastic dampers is markedly effective in reducing inelastic response measures such as roof displacement, maximum story drift, and roof velocity, compared to the bare MRF.
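The effect of added viscous damping on peak response can be illustrated on a single-degree-of-freedom oscillator. This is a toy sketch, not the paper's NLRHA of a forty-storey frame, and all parameter values are made up:

```python
def peak_displacement(m, k, c, v0=1.0, dt=0.001, steps=20000):
    # Semi-implicit Euler integration of m*x'' + c*x' + k*x = 0 for an
    # initial velocity pulse; returns the peak |x| over the run.
    x, v, peak = 0.0, v0, 0.0
    for _ in range(steps):
        a = -(c * v + k * x) / m
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Toy 1-DOF stand-in for the frame models (hypothetical m, k, c values).
bare   = peak_displacement(m=1.0, k=100.0, c=0.0)   # bare MRF analogue
damped = peak_displacement(m=1.0, k=100.0, c=4.0)   # with added viscous damping
```

Even this crude model shows the qualitative result reported above: the damped system reaches a visibly smaller peak displacement than the bare one.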

Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design

Procedia PDF Downloads 193
551 Entry, Descent and Landing System Design and Analysis of a Small Platform in Mars Environment

Authors: Daniele Calvi, Loris Franchi, Sabrina Corpino

Abstract:

Thanks to recent Mars missions, planetary exploration has made enormous strides over the past ten years, increasing the interest of the scientific community and beyond. These missions aim to fulfill many complex operations which are of paramount importance to mission success. Among these, a special mention goes to the Entry, Descent and Landing (EDL) functions, which require a dedicated system to overcome all the obstacles of these critical phases. The general objective of the system is to safely bring the spacecraft from orbital conditions to rest on the planet surface, following the designed mission profile. For this reason, this work develops a simulation tool integrating a re-entry trajectory algorithm in order to support EDL design during the preliminary phase of the mission. The tool was applied to a reference unmanned mission, whose objective is finding bio-evidence and bio-hazards on the Martian (sub)surface in order to support a future manned mission. The concept of operations (CONOPS) of the mission involves Space Penetrator Systems (SPS) that will descend to the Mars surface following a ballistic fall and will penetrate the ground after impact with the surface (to a depth of between 50 and 300 cm). Each SPS shall contain all the instrumentation required to sample and perform the required analyses. To respect the low-cost and low-mass requirements, an Entry, Descent and Impact (EDI) system based on an inflatable structure has been designed as a result of the tool. A suitable solution is the one chosen by the Finnish Meteorological Institute for the Mars MetNet mission, which uses an inflatable Thermal Protection System (TPS) called the Inflatable Braking Unit (IBU) and an additional inflatable decelerator. 
Consequently, there are three configurations during the EDI: at an altitude of 125 km, the IBU is inflated at a speed of 5.5 km/s; at an altitude of 16 km, the IBU is jettisoned and an Additional Inflatable Braking Unit (AIBU) is inflated; lastly, at about 13 km, the SPS is ejected from the AIBU and impacts on the Martian surface. Once all parameters are evaluated, it is possible to confirm that the chosen EDI system and strategy satisfy the requirements of the mission.
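The role of the inflatable decelerator can be illustrated with a one-dimensional ballistic descent under drag in an exponential atmosphere. The numbers below are rough Mars-like values chosen purely for illustration, not the tool's actual model or the mission's parameters:

```python
import math

def impact_speed(v0, h0, beta, g=3.71, rho0=0.020, H=11100.0, dt=0.01):
    # 1-D vertical descent: beta = m / (Cd * A) is the ballistic coefficient
    # [kg/m^2]; inflating a decelerator enlarges A and so lowers beta.
    # Atmosphere is exponential: rho(h) = rho0 * exp(-h / H).
    v, h = v0, h0  # downward speed [m/s], altitude [m]
    while h > 0:
        rho = rho0 * math.exp(-h / H)
        drag = 0.5 * rho * v * v / beta   # drag deceleration [m/s^2]
        v += (g - drag) * dt
        h -= v * dt
    return v

# Same release state, two ballistic coefficients (both values hypothetical).
fast = impact_speed(v0=200.0, h0=13000.0, beta=100.0)  # compact penetrator
slow = impact_speed(v0=200.0, h0=13000.0, beta=20.0)   # with inflated decelerator
```

Lowering the ballistic coefficient drives the vehicle toward a lower terminal velocity, which is exactly why the IBU/AIBU stages are inflated before the SPS is released.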

Keywords: EDL, Mars, mission, SPS, TPS

Procedia PDF Downloads 169
550 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the flow reproduction requires downstream velocity sampling from the template flow simulation. At particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transformation of the velocity components into vortex circulation and, finally, simulation of the reproduced template flow field by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, on different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are mainly done, using a downstream flow monitoring profile, by comparing the characteristic wind flow profiles through several statistical turbulence measures. Additionally, comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. 
The study also describes how flow reproduction can be achieved with less computational effort.
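The key transformation step, turning the four corner velocity samples of a square cell into a vortex circulation, amounts to a discrete line integral of velocity around the cell. A minimal sketch follows, with the counter-clockwise corner ordering and the trapezoidal quadrature as assumptions of this illustration:

```python
def cell_circulation(h, u_corners, v_corners):
    # Circulation = closed line integral of velocity around a square cell of
    # side h, trapezoidal rule on the four corner samples.
    # Corner order (counter-clockwise): bottom-left, bottom-right,
    # top-right, top-left.
    u_bl, u_br, u_tr, u_tl = u_corners
    v_bl, v_br, v_tr, v_tl = v_corners
    bottom = 0.5 * (u_bl + u_br) * h    # traverse in +x
    right  = 0.5 * (v_br + v_tr) * h    # traverse in +y
    top    = -0.5 * (u_tr + u_tl) * h   # traverse in -x
    left   = -0.5 * (v_tl + v_bl) * h   # traverse in -y
    return bottom + right + top + left

# Check on a rigid-body rotation u = (-w*y, w*x), whose circulation over
# an area h*h is exactly 2*w*h*h (trapezoidal rule is exact for linear fields).
w, h = 1.5, 0.2
u = [0.0, 0.0, -w * h, -w * h]   # u = -w*y at the four corners
v = [0.0, w * h, w * h, 0.0]     # v =  w*x at the four corners
gamma = cell_circulation(h, u, v)
```

The resulting circulation is what gets assigned to the vortex particle seeded at the cell, released at the sampling rate as noted above.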

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 312
549 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in our body since it is responsible for the majority of actions such as vision, memory, etc. However, different diseases such as Alzheimer's disease and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and prescribe the appropriate treatment, especially in the case of brain tumors. Different imaging modalities have been proposed for the diagnosis of brain tumors; the most powerful and most used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and define the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools, built on dedicated image processing algorithms, have been proposed and are used by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is applied, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). The DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback: it requires huge storage and is computationally expensive. To reduce the dimensions of the feature vector and the computing time, PCA is applied. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. 
A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, was designed and developed using the MATLAB GUIDE user interface.
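The multiresolution property of the DWT that the CAD exploits can be illustrated with a single level of the Haar wavelet, the simplest member of the family (the abstract does not state which wavelet the authors use, so Haar is an assumption of this sketch):

```python
import math

def haar_dwt(signal):
    # One level of the Haar DWT on an even-length 1-D signal:
    # normalized pairwise sums give the approximation (low-pass)
    # coefficients, pairwise differences give the details (high-pass).
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# Toy intensity row from an image (made-up values).
approx, detail = haar_dwt([4.0, 6.0, 10.0, 12.0])
```

Halving the signal per level while preserving its energy is what makes the transform multiresolution; recursing on the approximation coefficients yields the deeper decomposition levels used for feature extraction, after which PCA prunes the resulting (large) feature vector.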

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 251
548 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformation to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHx fragments (x = 1, 2) in the TPD. The assignments for the experimental IR peaks were made by visualizing the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. 
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory

Procedia PDF Downloads 109
547 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers; the outputs were then combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
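The first step, unsupervised separation of water from other land covers, can be approximated by simple histogram thresholding: water appears dark (low backscatter) in SAR imagery. The sketch below applies Otsu's method to toy backscatter values as a stand-in for the unspecified unsupervised classifier used in the study:

```python
def otsu_threshold(values, bins=64):
    # Otsu's method: choose the threshold that maximizes the
    # between-class variance of the two resulting groups.
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return lo + (best_t + 1) * width

# Toy bimodal backscatter (dB): dark water near -20, brighter land near -5.
pixels = [-20.5, -19.8, -20.1, -21.0, -19.5] * 10 + [-5.2, -4.8, -5.5, -4.9, -5.0] * 10
t = otsu_threshold(pixels)  # lands between the two clusters
```

Pixels below the threshold form the water mask; stacking such masks over multiple dates gives the persistent water bodies product that feeds the thinning step.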

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 140
546 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
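The best-performing model, Multiple Regression, reduces in the single-feature case to ordinary least squares, and the reported accuracy figure is a within-±10-runs hit rate. Both can be sketched with toy numbers; the feature choice and all data below are invented, not the IPL dataset:

```python
def fit_line(x, y):
    # Ordinary least squares for y = a*x + b on a single feature
    # (e.g., a hypothetical "runs in the first 10 overs" feature).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def accuracy_within(y_true, y_pred, tol=10):
    # Share of predictions within +/- tol runs of the true score,
    # matching the thresholded accuracy metric described above.
    hits = sum(1 for t, p in zip(y_true, y_pred) if abs(t - p) <= tol)
    return hits / len(y_true)

first10 = [45, 60, 52, 70, 38]       # toy feature values
totals  = [150, 185, 160, 205, 138]  # toy final team scores
a, b = fit_line(first10, totals)
preds = [a * x + b for x in first10]
acc = accuracy_within(totals, preds)
```

The study's actual models use many match and player features at once; this sketch only shows the mechanics of the fit and of the ±10-run evaluation metric.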

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 55
545 Spatial Suitability Assessment of Onshore Wind Systems Using the Analytic Hierarchy Process

Authors: Ayat-Allah Bouramdane

Abstract:

Since 2010, there have been sustained decreases in the unit costs of onshore wind energy and large increases in its deployment, varying widely across regions. Onshore wind production is affected by air density (cold air is denser and therefore more effective at producing wind power) and by wind speed (wind turbines cannot operate in very low or extremely stormy winds). Wind speed is essentially governed by surface friction, or roughness, and other topographic features of the land, which slow winds significantly over the continents. Hence, identifying the most appropriate locations for onshore wind systems is crucial to maximize their energy output and therefore minimize their Levelized Cost of Electricity (LCOE). This study focuses on the preliminary assessment of onshore wind energy potential in several areas of Morocco, with a particular focus on the city of Dakhla, by analyzing the diurnal and seasonal variability of wind speed for different hub heights, the frequency distribution of wind speed, the wind rose, and wind performance indicators such as wind power density, capacity factor, and LCOE. In addition to the climate criterion, other criteria (i.e., topography, location, environment) were selected from Geographic Referenced Information (GRI), reflecting different considerations. The impact of each criterion on the suitability map of onshore wind farms was identified using the Analytic Hierarchy Process (AHP). We find that the majority of suitable zones are located along the Atlantic Ocean and the Mediterranean Sea. We discuss the sensitivity of onshore wind site suitability to different aspects, such as the methodology (by comparing the Multi-Criteria Decision-Making (MCDM)-AHP results to the Mean-Variance Portfolio optimization framework) and the potential impact of climate change on the suitability map, and we provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's existing onshore wind installations are located within areas deemed suitable. This analysis may serve as a decision-making framework for cost-effective investment in onshore wind power in Morocco and help shape the future sustainable development of the city of Dakhla.
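The AHP weighting step described above can be sketched as follows: a pairwise comparison matrix over the criteria is reduced to its principal eigenvector, which gives the criterion weights, and a consistency ratio checks the judgments. The 4x4 matrix below is a hypothetical example over the four criterion groups, not the paper's actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparisons over four criteria on Saaty's 1-9 scale;
# entry A[i, j] states how much more important criterion i is than j.
criteria = ["climate", "topography", "location", "environment"]
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/4, 1/2, 2.0, 1.0],
])

# The principal eigenvector of the comparison matrix gives the weights.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); with random index
# RI = 0.90 for n = 4, a consistency ratio CR < 0.1 is conventionally acceptable.
lam_max = vals[k].real
CI = (lam_max - A.shape[0]) / (A.shape[0] - 1)
CR = CI / 0.90
for c, wi in zip(criteria, w):
    print(f"{c:12s} weight = {wi:.3f}")
print(f"consistency ratio = {CR:.3f}")
```

The resulting weights would then multiply the per-criterion suitability layers from the GRI data to produce the composite suitability map.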

Keywords: analytic hierarchy process (AHP), dakhla, geographic referenced information, morocco, multi-criteria decision-making, onshore wind, site suitability

Procedia PDF Downloads 172
544 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site to improve subsurface imaging results. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limited capability of personal computers. High-performance, integrated software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of large-scale time-lapse data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability to image the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
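The inversion step at the heart of such toolsets can be illustrated with a toy linear problem. The sketch below is a generic Tikhonov-regularized least-squares inversion on a synthetic one-dimensional model; it is not E4D-MP's parallel multiphysics solver, whose forward models (resistivity, induced polarization, gravity) are nonlinear and far larger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model d = G m + noise standing in for a geophysical
# survey: m is the subsurface property field, d the measured data.
n_model, n_data = 50, 30
G = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[20:30] = 1.0                      # a single anomalous block
d = G @ m_true + 0.01 * rng.normal(size=n_data)

# Tikhonov regularization: minimize ||G m - d||^2 + alpha ||L m||^2,
# with L a first-difference (smoothness) operator to stabilize the
# underdetermined problem (fewer data than model cells).
L = np.eye(n_model) - np.eye(n_model, k=1)
alpha = 1.0
m_est = np.linalg.solve(G.T @ G + alpha * (L.T @ L), G.T @ d)

print("recovered anomaly mean:", m_est[20:30].mean())
print("background mean:", m_est[:20].mean())
```

In a joint or time-lapse setting, the misfit term is extended to stack several data types or survey epochs against a shared model, which is where high-performance computing becomes necessary.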

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 158
543 An Investigation of the High-Frequency Isolation Performance of Quasi-Zero-Stiffness Vibration Isolators

Authors: Chen Zhang, Yongpeng Gu, Xiaotian Li

Abstract:

Quasi-zero-stiffness (QZS) vibration isolation technology, which enables ultra-low-frequency vibration isolation, has garnered significant attention in both academia and industry. In modern industries such as shipbuilding and aerospace, rotating machinery generates vibrations over a wide frequency range, imposing more stringent requirements on vibration isolation technologies: they must not only achieve ultra-low starting isolation frequencies but also provide effective isolation across mid- to high-frequency ranges. However, existing research on QZS vibration isolators primarily focuses on frequency ranges below 50 Hz. Moreover, studies have shown that in the mid- to high-frequency ranges, QZS isolators tend to generate resonance peaks that adversely affect their isolation performance. This limitation significantly restricts the practical applicability of QZS isolation technology. To address this issue, the present study investigates the high-frequency isolation performance of two typical QZS isolators: the mechanism-type three-spring QZS isolator and the structure-type bowl-shaped QZS isolator. First, the parameter conditions required to achieve quasi-zero-stiffness characteristics are derived for the two isolators based on static mechanical analysis. The theoretical transmissibility characteristics are then calculated using the harmonic balance method. Three-dimensional finite element models of the two QZS isolators are developed using the ABAQUS simulation software, and transmissibility curves are computed for the 0-500 Hz frequency range. The results indicate that the three-spring QZS mechanism exhibits multiple higher-order resonance peaks in the mid- to high-frequency ranges due to the higher-order modes of the springs. Springs with fewer coils and larger diameters can shift these higher-order modes to higher frequencies but cannot entirely eliminate them. In contrast, the bowl-shaped QZS isolator, through shape optimization using a spline-based representation, effectively mitigates the generation of higher-order resonance modes, resulting in superior isolation performance in the mid- to high-frequency ranges. This study provides essential theoretical insights for optimizing the vibration isolation performance of QZS technologies in complex, wide-frequency vibration environments, offering significant practical value for their application.
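The static analysis for the three-spring mechanism can be sketched with the widely used model of one vertical spring in parallel with two pre-compressed oblique springs: the oblique springs contribute negative stiffness that cancels the vertical stiffness at equilibrium. The parameter values below are assumed for illustration and are not taken from the paper.

```python
# Static model of the classic three-spring QZS mechanism: a vertical spring
# (stiffness k_v) in parallel with two oblique springs (stiffness k_h, free
# length L0) hinged a horizontal distance a from the axis. With L0 > a the
# oblique springs are pre-compressed and push the mass away from equilibrium.
k_h, L0, a = 1.0e4, 0.15, 0.10          # N/m, m, m (illustrative values)

# QZS condition from static analysis: the negative stiffness of the oblique
# springs exactly cancels k_v at the equilibrium position x = 0.
k_v = 2.0 * k_h * (L0 / a - 1.0)

def stiffness(x):
    """Tangent stiffness dF/dx of the combined mechanism at displacement x."""
    return k_v + 2.0 * k_h * (1.0 - L0 * a**2 / (a**2 + x**2) ** 1.5)

print(f"stiffness at equilibrium: {stiffness(0.0):.3e} N/m")
print(f"stiffness at x = 20 mm:   {stiffness(0.02):.3e} N/m")
```

The stiffness vanishes only at the equilibrium point and grows with displacement, which is why the isolator retains load-bearing capacity while achieving an ultra-low starting isolation frequency.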

Keywords: quasi-zero-stiffness, wide-frequency vibration, vibration isolator, transmissibility

Procedia PDF Downloads 12