Search results for: Whale Optimization Algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6093

3903 Classification of IoT Traffic Security Attacks Using Deep Learning

Authors: Anum Ali, Kashaf ad Dooja, Asif Saleem

Abstract:

The trend in future smart cities is towards the Internet of Things (IoT); IoT creates dynamic connections in a ubiquitous manner. Smart cities offer ease and flexibility for daily life matters. With small devices connected to cloud servers based on IoT, network traffic between these devices is growing exponentially, and its security is a pressing concern, since the rising rate of cyber attacks makes this traffic vulnerable. This paper discusses the latest machine learning approaches in related work; further, to tackle the increasing rate of cyber attacks, a machine learning algorithm is applied to IoT-based network traffic data. The proposed algorithm trains itself on the data and identifies different sections of device interaction by using supervised learning, acting as a classifier associated with a specific IoT device class. The simulation results clearly identify the attacks and produce few false detections.
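As an illustration of the supervised-learning step described above, the following Python sketch trains a generic classifier on a synthetic IoT-traffic feature matrix; the features, labels, and random-forest choice are assumptions for demonstration, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): supervised classification of
# IoT traffic records into device/attack classes on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical features: packet size, inter-arrival time, port, flow duration
X = rng.random((1000, 4))
y = rng.integers(0, 3, size=1000)   # 0 = benign, 1/2 = attack classes (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```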

Keywords: IoT, traffic security, deep learning, classification

Procedia PDF Downloads 154
3902 Optimum Parameter of a Viscous Damper for Seismic and Wind Vibration

Authors: Soltani Amir, Hu Jiaxin

Abstract:

Determination of the optimal parameters of a passive control device is the primary objective of this study. Expanding the use of control devices in wind and earthquake hazard reduction has led to the development of various control systems. The advantage of non-linear characteristics in a passive control device and the optimal control method using the LQR algorithm are explained in this study. Finally, this paper introduces a simple approach to determine the optimum parameters of a nonlinear viscous damper for vibration control of structures. A MATLAB program is used to produce the dynamic motion of the structure, considering the stiffness matrix of the SDOF frame and the non-linear damping effect. This study concludes that the proposed variable damping system controls the system response better than a linear damping system. In addition, according to the energy dissipation graph, the total energy loss is greater in the non-linear damping system than in the other systems.
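The LQR step mentioned in the abstract can be sketched as follows for a single-degree-of-freedom frame; the mass, damping, stiffness, and weighting matrices below are assumed placeholder values, not the paper's structure.

```python
# Minimal LQR sketch for an SDOF frame (all numbers are assumptions for illustration).
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1.0e3, 2.0e3, 1.0e6                  # assumed mass [kg], damping [Ns/m], stiffness [N/m]
A = np.array([[0.0, 1.0], [-k / m, -c / m]])   # state: [displacement, velocity]
B = np.array([[0.0], [1.0 / m]])               # control force input
Q = np.diag([1.0e6, 1.0])                      # weighting choices, not from the paper
R = np.array([[1.0e-3]])

P = solve_continuous_are(A, B, Q, R)           # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                # optimal feedback gain, u = -K x
print("LQR gain:", K)
```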

Keywords: passive control system, damping devices, viscous dampers, control algorithm

Procedia PDF Downloads 470
3901 Node Pair Selection Scheme in Relay-Aided Communication Based on Stable Marriage Problem

Authors: Tetsuki Taniguchi, Yoshio Karasawa

Abstract:

This paper describes a node pair selection scheme, based on the stable marriage problem, for a relay-aided multiple-source multiple-destination communication system. A general case is assumed in which all source, relay, and destination nodes are equipped with multiple antennas and carry out multistream transmission. Based on several metrics derived from inter-node channel conditions, the preference order is determined for all source-relay and relay-destination relations, and the node pairs are then determined using the Gale-Shapley algorithm. Computer simulations show that the effectiveness of node pair selection is greater in multihop communication. Some additional aspects that differ from the relay-less case are also investigated.
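For readers unfamiliar with the matching step, the sketch below shows a plain Gale-Shapley implementation pairing sources with relays from toy preference lists; the preference orderings (in practice derived from channel metrics) are hypothetical.

```python
# Illustrative Gale-Shapley sketch for source-relay pairing (toy preferences).
def gale_shapley(source_prefs, relay_prefs):
    """source_prefs/relay_prefs: dict mapping node -> ordered list of preferred partners."""
    free_sources = list(source_prefs)
    next_choice = {s: 0 for s in source_prefs}     # index of the next relay to propose to
    engaged = {}                                   # relay -> source
    rank = {r: {s: i for i, s in enumerate(p)} for r, p in relay_prefs.items()}
    while free_sources:
        s = free_sources.pop(0)
        r = source_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if r not in engaged:
            engaged[r] = s
        elif rank[r][s] < rank[r][engaged[r]]:     # relay prefers the new proposer
            free_sources.append(engaged[r])
            engaged[r] = s
        else:
            free_sources.append(s)
    return {s: r for r, s in engaged.items()}

# Hypothetical channel-based preference lists
sources = {"S1": ["R1", "R2"], "S2": ["R1", "R2"]}
relays = {"R1": ["S2", "S1"], "R2": ["S1", "S2"]}
print(gale_shapley(sources, relays))               # stable source -> relay matching
```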

Keywords: relay, multiple input multiple output (MIMO), multiuser, amplify and forward, stable marriage problem, Gale-Shapley algorithm

Procedia PDF Downloads 397
3900 A Constrained Neural Network Based Variable Neighborhood Search for the Multi-Objective Dynamic Flexible Job Shop Scheduling Problems

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

In this paper, a new neural network based variable neighborhood search is proposed for multi-objective dynamic flexible job shop scheduling problems. The neural network controls the problem constraints to prevent infeasible solutions, while the Variable Neighborhood Search (VNS) applies moves based on the critical block concept to improve the solutions. Two approaches are used for managing the constraints: in the first, infeasible solutions are modified according to the constraints after the moves are applied, while in the second, infeasible moves are prevented altogether. Several neighborhood structures from the literature, with some modifications, as well as new structures, are used in the VNS; the suggested neighborhoods are defined more systematically and are easy to implement. The comparison is based on a multi-objective flexible job shop scheduling problem that is dynamic because of differing job release times and machine breakdowns. The results show that the presented method performs better than the VNS variants selected from the literature.
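A generic skeleton of the VNS loop named above is sketched below; the neighborhood, objective, and feasibility check are toy stand-ins, and the authors' critical-block moves and neural-network constraint handler are not reproduced here.

```python
# Generic VNS skeleton on a toy permutation problem (illustrative only).
import random

def vns(initial, neighborhoods, objective, feasible, max_iter=100):
    best = initial
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best)               # shake in neighborhood k
            if feasible(candidate) and objective(candidate) < objective(best):
                best, k = candidate, 0                       # improvement: restart neighborhoods
            else:
                k += 1                                       # try the next neighborhood
    return best

def swap_two(p):
    q = p[:]
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

def inversions(p):                                           # placeholder for a makespan objective
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

random.seed(0)
start = random.sample(range(10), 10)
print(vns(start, [swap_two], inversions, feasible=lambda p: True))
```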

Keywords: constrained optimization, neural network, variable neighborhood search, flexible job shop scheduling, dynamic multi-objective optimization

Procedia PDF Downloads 346
3899 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building

Authors: Sreto Boljevic

Abstract:

Heating systems based on geothermal energy sources are becoming increasingly popular for commercial/community buildings as their management looks for a more efficient and environmentally friendly way to run the heating system. The thermal energy supply of most European commercial/community buildings at present is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring of the thermal energy supply is essential. At present, heating and cooling account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, contrary to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns to researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil-fuel space heating equipment used to supply thermal energy for residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study of implementing GSHP technology in a community building with an existing fossil-fuelled heating system. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system regarding the technical, environmental, and economic impacts of the system implementation, including the payback period. In addition, the algorithm is created to be usable for feasibility studies of many different types of buildings. The algorithm is tested on a building that was built in 1930 and is used as a church, located in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.
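A back-of-the-envelope sketch of the kind of economic output such a feasibility algorithm produces is shown below; every figure (heat demand, prices, COP, capital cost) is an assumed placeholder, not the Cork case-study data.

```python
# Simple-payback sketch: gas boiler vs. GSHP (all values are illustrative assumptions).
annual_heat_demand_kwh = 120_000        # assumed building heat demand [kWh/year]
gas_price_per_kwh = 0.08                # assumed fuel price [EUR/kWh]
gas_boiler_efficiency = 0.85
elec_price_per_kwh = 0.25               # assumed electricity price [EUR/kWh]
heat_pump_cop = 4.0                     # assumed seasonal COP of the GSHP
gshp_capital_cost = 60_000              # assumed installed cost [EUR]

gas_annual_cost = annual_heat_demand_kwh / gas_boiler_efficiency * gas_price_per_kwh
gshp_annual_cost = annual_heat_demand_kwh / heat_pump_cop * elec_price_per_kwh
annual_saving = gas_annual_cost - gshp_annual_cost
payback_years = gshp_capital_cost / annual_saving
print(f"annual saving: {annual_saving:.0f} EUR, simple payback: {payback_years:.1f} years")
```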

Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy

Procedia PDF Downloads 220
3898 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration in logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result-values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings down the 95th latency percentile from 30 to 4 seconds.

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 259
3897 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication not only between vehicles themselves but also between vehicles and infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, movement direction determination, and speed measurement. Magnetic sensors can sense and measure changes in the earth’s magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on sensor sensitivity, changes in the earth’s magnetic field caused by passing vehicles can be detected and analyzed in order to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing in a close-to-real-conditions environment. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
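The speed-measurement idea (two magnetometers a known distance apart, with the time lag between their detections giving the vehicle speed) can be sketched as follows on synthetic signals; the spacing, sampling rate, and threshold are assumptions, not the prototype's parameters.

```python
# Two-sensor speed estimation sketch on synthetic magnetic signatures.
import numpy as np

fs = 1000.0                 # sampling rate [Hz] (assumed)
sensor_spacing_m = 1.0      # assumed distance between the two sensors [m]
t = np.arange(0, 2, 1 / fs)

def passing_signature(t, t0):
    """Synthetic magnetic disturbance centred at time t0."""
    return np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))

sig_a = passing_signature(t, 0.50) + 0.02 * np.random.default_rng(1).standard_normal(t.size)
sig_b = passing_signature(t, 0.53) + 0.02 * np.random.default_rng(2).standard_normal(t.size)

def detection_time(signal, threshold=0.5):
    idx = np.argmax(signal > threshold)            # first threshold crossing
    return idx / fs

dt = detection_time(sig_b) - detection_time(sig_a)
print(f"estimated speed: {sensor_spacing_m / dt:.1f} m/s")
```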

Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system

Procedia PDF Downloads 122
3896 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, namely linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset, giving a total of 189 features used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
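A sketch of the featurization-plus-random-forest pipeline described above is given below, using RDKit MACCS keys on a tiny made-up SMILES set; the labels and model settings are placeholders, not the AqSolDB study.

```python
# Illustrative featurization + random forest sketch (assumes RDKit is installed).
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCCCCCCC", "O=C(O)c1ccccc1O", "CCN(CC)CC"]
labels = [1, 0, 1, 0, 1, 1]                      # placeholder solubility classes

def maccs_bits(smi):
    mol = Chem.MolFromSmiles(smi)
    fp = MACCSkeys.GenMACCSKeys(mol)             # 167-bit MACCS fingerprint
    return np.array([fp[i] for i in range(fp.GetNumBits())])

X = np.vstack([maccs_bits(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(maccs_bits("CCOCC").reshape(1, -1)))
```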

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 41
3895 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System

Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong

Abstract:

In this paper, we investigate an adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain a robust control law for uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system and to ensure the convergence properties of the proposed iterative algorithm. Moreover, the global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and the simulation results demonstrate the performance of the proposed algorithm for an inverted pendulum system.

Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input state stability, ISS, inverted pendulum

Procedia PDF Downloads 195
3894 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff’s quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters; it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before being applied to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 446
3893 Optimization in the Compressive Strength of Iron Slag Self-Compacting Concrete

Authors: Luis E. Zapata, Sergio Ruiz, María F. Mantilla, Jhon A. Villamizar

Abstract:

Sand as a fine aggregate for concrete production needs a feasible substitute due to several environmental issues. In this work, a study of the behavior of self-compacting concrete mixtures is presented, with replacement of sand by iron slag from 0.0% to 50.0% by weight and variations of the water/cementitious material ratio between 0.3 and 0.5. Fresh-state control tests of slump flow, T500, J-ring, and L-box were performed. In the hardened state, compressive strength was determined and optimization by response surface analysis was carried out. The study of the variables in the hardened state was based on inferential statistical analyses using central composite design methodology and subsequent analysis of variance (ANOVA). An increase in compressive strength of up to 50% over the control mixtures at 7, 14, and 28 days of maturity was the most relevant result regarding the presence of iron slag as a replacement for natural sand. Considering the obtained results, it is possible to infer that iron slag is an acceptable alternative replacement material for the natural fine aggregate to be used in structural concrete.

Keywords: ANOVA, iron slag, response surface analysis, self-compacting concrete

Procedia PDF Downloads 144
3892 Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images

Authors: A. Biran, P. Sobhe Bidari, A. Almazroe, V. Lakshminarayanan, K. Raahemifar

Abstract:

Diabetic Retinopathy (DR) is a severe retinal disease which is caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative stage. Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. In addition, a Support Vector Machine (SVM) classifier is used to classify retinal images as normal or abnormal cases, the latter including non-proliferative or proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code. The method detects DR reliably; the sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
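Part of the preprocessing chain named in the abstract (CLAHE, Gabor filtering, thresholding) can be sketched with OpenCV as follows; the synthetic image and filter parameters are assumptions, and the real pipeline additionally applies CHT and feeds hand-crafted features into an SVM on STARE fundus images.

```python
# Preprocessing sketch on a synthetic grayscale image (not a real fundus image).
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((256, 256)) * 255).astype(np.uint8)      # stand-in for a fundus image

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)                                 # contrast-limited equalization

gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                           lambd=10.0, gamma=0.5)           # assumed filter parameters
response = cv2.filter2D(enhanced, cv2.CV_32F, gabor)        # texture response map

_, mask = cv2.threshold(response, response.mean() + response.std(), 255,
                        cv2.THRESH_BINARY)
print("candidate lesion pixels:", int((mask > 0).sum()))
```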

Keywords: diabetic retinopathy, fundus images, STARE, Gabor filter, support vector machine

Procedia PDF Downloads 294
3891 A Novel Algorithm for Parsing IFC Models

Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai

Abstract:

Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering and Construction) industry. CAD is a form of computer-aided building modeling that architects, engineers, and contractors use to create and view two- and three-dimensional models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling system that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models generate open-source IFC (Industry Foundation Classes) files which aim for interoperability in exchanging information throughout the project lifecycle among various disciplines. The methods developed in previous studies require either an IFC schema or an MVD and software applications, such as an IFC model server or a Building Information Modeling (BIM) authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting a partial or total model from an Industry Foundation Classes (IFC) instance model without an IFC schema or a complete IFC model view definition (MVD).

Keywords: BIM, CAD, IFC, MVD

Procedia PDF Downloads 300
3890 Text Analysis to Support Structuring and Modelling a Public Policy Problem: Outline of an Algorithm to Extract Inferences from Textual Data

Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis

Abstract:

Policy making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics, therefore, is key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support the modelling of public policy problem situations in a more objective way, based on domain experts' knowledge and scientific evidence. The objective of this study is to support the modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for the analysis of qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.

Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction

Procedia PDF Downloads 589
3889 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, a larger number of air quality monitoring stations is planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposes to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. The setting up of ambient air quality monitoring stations and their operation and maintenance are highly expensive; therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method, and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization by the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) is also explored in the present work for the design of an air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations with their corresponding land use patterns are ranked and identified from 1 km x 1 km grids.
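A minimal Inverse Distance Weighted interpolation sketch is shown below; the station coordinates and concentrations are synthetic, not the Chennai monitoring data.

```python
# IDW interpolation sketch: estimate a pollutant value at an unmonitored grid point.
import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # assumed locations [km]
pm25 = np.array([40.0, 55.0, 35.0, 60.0])                               # assumed concentrations

def idw(point, coords, values, power=2.0):
    d = np.linalg.norm(coords - point, axis=1)
    if np.any(d == 0):                         # exact hit on a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

print(idw(np.array([0.3, 0.4]), stations, pm25))
```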

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 189
3888 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of Microcalcifications (MCCs) are the most frequent symptoms of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Square Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made. For the comparative evaluation, a confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas are collected from mammogram images of the DDSM database, containing 235 malignant and 145 benign samples. A set of 50 features is calculated for each suspicious area. After this, an optimal subset of the 23 most suitable features is selected from the 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.

Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization

Procedia PDF Downloads 354
3887 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when an expert and his knowledge are involved in estimating the MAGDM parameters. We consider an emergency decision-making model where expert assessments on humanitarian aid from distribution centers (HADC) are represented as q-rung orthopair fuzzy numbers, and the data structure is described within the data body theory. Based on focal probability construction and the experts’ evaluations, an objective function, a distribution centers’ selection ranking index, is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 92
3886 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling of real-world complex dynamic systems in biology, engineering, and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if it exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for system design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
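The path-counting idea behind the structural invertibility check can be sketched with a standard node-splitting max-flow construction (here via networkx on a toy graph); this illustrates the general technique of counting vertex-disjoint input-output paths, not the authors' implementation.

```python
# Vertex-disjoint input-output path count via node splitting + max flow (toy graph).
import networkx as nx

G = nx.DiGraph([("u1", "a"), ("u2", "b"), ("a", "c"), ("b", "c"),
                ("a", "y1"), ("c", "y2"), ("b", "y2")])
inputs, outputs = ["u1", "u2"], ["y1", "y2"]

# Split every node v into v_in -> v_out with capacity 1 to enforce vertex-disjointness.
H = nx.DiGraph()
for v in G:
    H.add_edge((v, "in"), (v, "out"), capacity=1)
for u, v in G.edges:
    H.add_edge((u, "out"), (v, "in"), capacity=1)
for u in inputs:
    H.add_edge("S", (u, "in"), capacity=1)
for y in outputs:
    H.add_edge((y, "out"), "T", capacity=1)

flow_value, _ = nx.maximum_flow(H, "S", "T")
print("vertex-disjoint input-output paths:", flow_value)
```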

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 150
3885 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies

Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar

Abstract:

Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimization in microfluidic channels extends significantly into biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimized, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. We mitigate the manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analyzing diseases. A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.

Keywords: microfluidic device, minitab, statistical optimization, response surface methodology

Procedia PDF Downloads 68
3884 Modelling Water Usage for Farming

Authors: Ozgu Turgut

Abstract:

Water scarcity is a problem for many regions and requires immediate action; solutions cannot be postponed for long. It is known that farming consumes a significant portion of usable water. In recent years, efforts to transition from surface watering to drip or sprinkler watering systems have started to pay off. It is also known, however, that this transition does not necessarily translate into an increase in the capacity dedicated to other water consumption channels such as city water or power usage. In order to control and allocate the water resource more purposefully, new watering systems have to be used with monitoring abilities that can limit the usage capacity for each farm. In this study, a decision support model is proposed which relies on a bi-objective stochastic linear optimization that takes crop yield and price volatility into account. The model generates annual planting plans as well as water usage limits for each farmer in the region while taking into account the total value (i.e., profit) of the overall harvest. The mathematical model is solved to optimality using the L-shaped method. The decision support model can be especially useful for regional administrations to plan the next year's planting as well as water-related incomes and expenses. That is why not only a single optimum but also a set of representative solutions from the Pareto set is generated with the proposed approach.

Keywords: decision support, farming, water, tactical planning, optimization, stochastic, pareto

Procedia PDF Downloads 74
3883 Flashover Detection Algorithm Based on Mother Function

Authors: John A. Morales, Guillermo Guidi, B. M. Keune

Abstract:

Electric power supply is a crucial topic for economic and social development. Power outage statistics show that atmospheric discharges are a major cause of those outages. In this context, it is necessary to correctly detect when overhead line insulators are faulted. In this paper, an algorithm is proposed to detect whether or not a lightning stroke generates a permanent fault on insulator strings. Lightning stroke simulations developed using the Alternative Transients Program are used for this purpose. Based on these insights, a novel approach is designed that relies on the analysis of mother functions corresponding to the given variance-covariance matrix. Signals registered at the insulator string are projected onto the corresponding axes by means of Principal Component Analysis. By exploiting these new axes, it is possible to determine a flashover characteristic zone useful for good insulation design. The proposed methodology for flashover detection extends the existing approaches for the analysis and study of lightning performance on transmission lines.
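The projection step (signals projected onto the principal axes of their variance-covariance matrix) can be sketched as follows on synthetic waveforms; the data and the choice of two leading components are assumptions for illustration.

```python
# PCA projection sketch: synthetic insulator-string waveforms projected onto
# the leading principal axes of their covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((200, 64))            # 200 synthetic waveforms, 64 samples each

centered = signals - signals.mean(axis=0)
cov = np.cov(centered, rowvar=False)                # variance-covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)              # eigen-decomposition (ascending order)
axes = eigvecs[:, ::-1][:, :2]                      # keep the two leading principal axes
scores = centered @ axes                            # projection of each waveform
print("projected shape:", scores.shape)             # points in the characteristic plane
```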

Keywords: mother function, outages, lightning, sensitivity analysis

Procedia PDF Downloads 587
3882 Use of Transportation Networks to Optimize The Profit Dynamics of the Product Distribution

Authors: S. Jayasinghe, R. B. N. Dissanayake

Abstract:

Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing the transportation cost. The model focuses on transportation from decentralized production plants to a centralized distribution centre and then on distribution to island-wide agencies, considering customer satisfaction as a requirement. The company produces 9 types of food items with 82 different varieties and 4 types of non-food items with 34 different varieties. Among the 6 production plants, 4 are located near the city of Mawanella and the other 2 are located in Galewala and Anuradhapura, which are 80 km and 150 km away from Mawanella, respectively. The warehouse located in Mawanella is the main production plant and also the only distribution plant; it distributes manufactured products to 39 agencies island-wide. The average values and average amounts of goods for 6 consecutive months, from May 2013 to October 2013, were collected, and average demand values were then calculated. The following constraints are used as necessary requirements to satisfy the optimum condition of the model: there is one source, there are 39 destinations, and supply and demand for all the agencies are equal. Using the transport cost per kilometre, the total transport cost was calculated. The model was then formulated using the distances and the flow of the distribution. Network optimization and linear programming techniques were used to develop the model, while Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil the customers’ requirements for a month, which is much less than the cost observed without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers. The company currently satisfies its customers’ requirements by 85%; this satisfaction can be increased to 97% by using the model. Therefore, this model can be used by other similar companies in order to reduce their transportation cost.
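A toy transportation linear program in the spirit of the model described above is sketched below; the costs, supplies, and demands are made up, not the company's data.

```python
# Toy transportation LP: minimize total transport cost subject to supply/demand balance.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],        # cost per unit from plant i to agency j (assumed)
                 [5.0, 4.0, 7.0]])
supply = np.array([60.0, 40.0])          # assumed plant supplies
demand = np.array([30.0, 40.0, 30.0])    # assumed agency demands

c = cost.flatten()
A_eq_supply = np.kron(np.eye(2), np.ones(3))   # shipments out of each plant equal its supply
A_eq_demand = np.kron(np.ones(2), np.eye(3))   # shipments into each agency equal its demand
A_eq = np.vstack([A_eq_supply, A_eq_demand])
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum transport cost:", res.fun)
print("shipment plan:\n", res.x.reshape(2, 3))
```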

Keywords: mathematical model, network optimization, linear programming

Procedia PDF Downloads 346
3881 Detecting Cyberbullying, Spam and Bot Behavior and Fake News in Social Media Accounts Using Machine Learning

Authors: M. D. D. Chathurangi, M. G. K. Nayanathara, K. M. H. M. M. Gunapala, G. M. R. G. Dayananda, Kavinga Yapa Abeywardena, Deemantha Siriwardana

Abstract:

Due to the growing popularity of social media platforms, there are various concerns, most notably cyberbullying, spam, bot accounts, and the spread of incorrect information. To develop a risk score calculation system as a thorough method for deciphering and exposing unethical social media profiles, this research explores the algorithms that, to the best of our knowledge, are most suitable for detecting the mentioned concerns. Various models, such as Naïve Bayes, CNN, KNN, Stochastic Gradient Descent, and Gradient Boosting Classifier, were examined, and the best results were taken into the development of the risk score system. For cyberbullying, the Logistic Regression algorithm achieved an accuracy of 84.9%, while the spam-detecting MLP model gained 98.02% accuracy. For bot account identification, the Random Forest algorithm obtained 91.06% accuracy, and 84% accuracy was achieved for fake news detection using SVM.

Keywords: cyberbullying, spam behavior, bot accounts, fake news, machine learning

Procedia PDF Downloads 36
3880 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. The presence of too many features in machine learning is known not only to cause algorithms to slow down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-square) and the root mean square error (RMSE).
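The wrapper-based selection step can be sketched with the third-party boruta package (assumed available) as follows; the synthetic regression data stands in for the 79-feature housing dataset.

```python
# Boruta feature-selection sketch on synthetic data (assumes 'boruta' is installed).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from boruta import BorutaPy

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.standard_normal(300) * 0.1   # only features 0 and 3 matter

rf = RandomForestRegressor(n_estimators=200, random_state=0)
boruta = BorutaPy(rf, n_estimators="auto", random_state=0)
boruta.fit(X, y)                                 # iteratively compares features to shadow features
print("confirmed features:", np.where(boruta.support_)[0])
```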

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 323
3879 Molecular Modeling of 17-Picolyl and 17-Picolinylidene Androstane Derivatives with Anticancer Activity

Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković

Abstract:

In the present study, molecular modeling of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives with significant anticancer activity was carried out. Modelling of the studied compounds was performed using the CS ChemBioDraw Ultra v12.0 program for drawing 2D molecular structures and CS ChemBio3D Ultra v12.0 for 3D molecular modelling. The obtained 3D structures were subjected to energy minimization using the molecular mechanics force field method (MM2); the cutoff for structure optimization was set at a gradient of 0.1 kcal/Åmol. Full geometry optimization was done with the Austin Model 1 (AM1) until the root mean square (RMS) gradient reached a value smaller than 0.0001 kcal/Åmol, using the Molecular Orbital Package (MOPAC) program. The obtained physicochemical, lipophilicity, and topological descriptors were used for the analysis of molecular similarities and dissimilarities, applying suitable chemometric methods (principal component analysis and cluster analysis). These results are part of project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1306.

Keywords: androstane derivatives, anticancer activity, chemometrics, molecular descriptors

Procedia PDF Downloads 361
3878 Research on Optimization Strategies for the Negative Space of Urban Rail Transit Based on Urban Public Art Planning

Authors: Kexin Chen

Abstract:

As an important mode of transportation addressing the supply-demand contradiction generated in the rapid urbanization process, the urban rail transit system has developed rapidly over the past ten years in China. During this rapid development, the space of urban rail transit has encountered many problems, such as spatial simplification, dull sensory experience, and poor regional identification. This paper focuses on the negative space of subway stations and on spatial softening, comparing and learning from foreign cases. The article sorts out cases at home and abroad, makes a comparative study of them, analyses more diversified settings of public art, sets forth propositions on the domestic types of public art in urban rail transit space for reference, and then shows the relationship between the spatial attributes of urban rail transit space and public art forms. On this foundation, it aims to characterize more diverse ways of setting public art; it then suggests three public art forms with corresponding properties, namely the static presentation mode, the dynamic image mode, and the spatial softening mode, and finds a method for urban public art to optimize negative space.

Keywords: diversification, negative space, optimization strategy, public art planning

Procedia PDF Downloads 207
3877 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot

Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan

Abstract:

Home zone parking pilot (HPP) is a fast-growing segment of low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself within a range of up to 100 meters inside a recurrent home/office parking lot, which requires a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost fish-eye camera based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-unfriendly and computationally expensive, making deployment on embedded ADAS systems difficult. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method can reduce computational costs while maintaining high accuracy. Once combined with the vehicle’s wheel-pulse information, the system can construct maps and locate the vehicle in real time. This article discusses in detail (1) the fish-eye based Around View Monitoring (AVM) with transparent chassis images as the inputs, (2) an Object Detection (OD) based feature point extraction algorithm to generate a point cloud, (3) a low-computational-cost parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate the experimental results with an embedded ADAS system installed on a real car in an underground parking lot.

Keywords: ADAS, home zone parking pilot, object detection, visual SLAM

Procedia PDF Downloads 67
3876 Optimal Design of RC Pier Accompanied with Multi Sliding Friction Damping Mechanism Using Combination of SNOPT and ANN Method

Authors: Angga S. Fajar, Y. Takahashi, J. Kiyono, S. Sawada

Abstract:

The structural system concept of an RC pier accompanied by a multi sliding friction damping mechanism was developed based on a numerical analysis approach. In implementation, however, designing this kind of structural system takes considerable effort because of its high complexity. During design, the special behaviors of this structural system should be considered, including flexible small deformation, sufficient elastic deformation capacity, sufficient lateral force resistance, and sufficient energy dissipation. The confinement distribution of the friction devices has a significant influence on these behaviors. Optimization and prediction with multi-function regression are expected to provide an easier and simpler design method for this structural system. The confinement distribution of the friction devices is optimized with SNOPT in OpenSees, while some design variables of the structure are predicted using multi-function regression with an ANN. Based on the optimization and prediction, this structural system can be designed easily and simply.

Keywords: RC Pier, multi sliding friction device, optimal design, flexible small deformation

Procedia PDF Downloads 367
3875 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study focused on the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and real stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity average region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, diversion capacity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.

Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation

Procedia PDF Downloads 71
3874 Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue

Authors: U.V. Suryawanshi, S.S. Chowhan, U.V Kulkarni

Abstract:

The accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in Magnetic Resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portrays the performance of segmentation techniques that are used on brain MRI. A large variety of algorithms for segmentation of brain MRI has been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain using Fuzzy c-means (FCM), Kernel-based Fuzzy c-means clustering (KFCM), Spatial Fuzzy c-means (SFCM), and Improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI, methods for noise reduction, and segmentation approaches. All methods are applied to MRI brain images degraded by salt-and-pepper noise; the results demonstrate that the IFCM algorithm is more robust to noise than the standard FCM algorithm. We conclude with a discussion of trends in future research on brain segmentation and of modifications to IFCM for better results.
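A minimal implementation of the baseline FCM algorithm (the reference point against which KFCM, SFCM, and IFCM are compared) is sketched below on synthetic 1-D intensity data; it is illustrative only, not the evaluated MRI pipeline.

```python
# Standard fuzzy c-means sketch on synthetic intensity values (not real MRI data).
import numpy as np

def fcm(data, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                     # fuzzy membership matrix
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ data) / um.sum(axis=0)          # weighted cluster centres
        dist = np.abs(data[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))                 # membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(mu, 5, 200) for mu in (40, 120, 200)])  # 3 tissue-like classes
centers, memberships = fcm(intensities)
print("estimated tissue centres:", np.sort(centers))
```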

Keywords: image segmentation, preprocessing, MRI, FCM, KFCM, SFCM, IFCM

Procedia PDF Downloads 332