Search results for: optimization methods
15519 Resource Allocation Modeling and Simulation in Border Security Application
Authors: Kai Jin, Hua Li, Qing Song
Abstract:
Homeland security and border safety are issues for any country. This paper takes the border security of the US as an example to discuss the usage and efficiency of simulation tools in homeland security applications. In this study, available resources and different illegal-infiltration parameters are defined, including their individual behavior and objectives, in order to develop a model that describes the border patrol system. A simulation model is created in Arena and used to study the dynamic activities in border security. Possible factors that may affect the effectiveness of the border patrol system are proposed. Individual and factorial analyses of these factors are conducted and some suggestions are made.
Keywords: resource optimization, simulation, modeling, border security
Procedia PDF Downloads 516
15518 Optimization for the Hydraulic Clamping System of an Internal Circulation Two-Platen Injection Molding Machine
Authors: Jian Wang, Lu Yang, Jiong Peng
Abstract:
The internal circulation two-platen clamping system for injection molding machines (IMM) has many potential advantages for energy saving. In order to estimate its properties, the experiments in this paper were carried out, and displacement and pressure of the components were measured. For comparison, a model of the hydraulic clamping system was established in AMESim, from which the related parameters as well as the energy consumption could be calculated. Based on this analysis, the hydraulic system was optimized in order to reduce energy consumption.
Keywords: AMESim, energy-saving, injection molding machine, internal circulation
Procedia PDF Downloads 550
15517 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning
Authors: Madhawa Basnayaka, Jouni Paltakari
Abstract:
Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, higher reading distance, and fast automatic identification without human interference, unlike already available technologies like optical barcodes. The design optimization of chipless RFID tags is crucial as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited due to the constraints of data encoding capacity and the ability to design accurate yet efficient configurations. The traditional approach to accomplishing design parameters for a desired EM response involves iterative adjustment of design parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods encounter limitations in optimizing design parameters efficiently due to their speed and resource consumption. In this work, a deep learning neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameters prediction. First, the spectrum prediction DNN was trained to minimize mean square error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum for the input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameters prediction DNN and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN.
The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations is significantly decreased by this approach. Therefore, highly efficient and ultrafast bi-directional DNN models allow rapid design of complicated chipless RFID tags.
Keywords: artificial intelligence, chipless RFID, deep learning, machine learning
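The spectrum-prediction training described above can be sketched in miniature. In the hedged toy below, a linear model stands in for the actual DNN, and the synthetic dimension-to-frequency data, learning rate, and iteration count are all illustrative assumptions; only the MSE-minimizing gradient-descent loop is the point.

```python
import random

# Toy stand-in for the spectrum-prediction network: a linear model trained
# by gradient descent to minimize mean square error (MSE), as the abstract
# describes. The mapping from ring dimension to resonant frequency is
# synthetic and purely illustrative.
random.seed(0)

# Assumed training data: ring side length (mm) -> resonant frequency (GHz),
# a made-up roughly inverse-linear relationship with a little noise.
data = [(d, 30.0 - 2.5 * d + random.gauss(0, 0.1))
        for d in [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]]

w, b = 0.0, 0.0   # model: f_hat = w * d + b
lr = 0.01         # learning rate

def mse(w, b):
    return sum((w * d + b - f) ** 2 for d, f in data) / len(data)

initial_loss = mse(w, b)
for _ in range(20000):
    # Gradients of MSE with respect to w and b
    gw = sum(2 * (w * d + b - f) * d for d, f in data) / len(data)
    gb = sum(2 * (w * d + b - f) for d, f in data) / len(data)
    w -= lr * gw
    b -= lr * gb
final_loss = mse(w, b)
```

After training, the fitted slope is negative (larger rings resonate lower), mirroring how the real spectrum-prediction DNN learns the parameter-to-spectrum mapping before being coupled to the inverse network.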
Procedia PDF Downloads 50
15516 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography
Authors: Nicole M. Martino
Abstract:
Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence, and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; however, this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck – the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were first to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide higher confidence than a condition assessment completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as a part of this research, including these plan-view plots, are presented in this paper. Furthermore, in order to answer the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods.
This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.
Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks
Procedia PDF Downloads 154
15515 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, with sequence treated as a constraint.
However, the method cannot be used genome-wide due to its high computational cost: each folding prediction by energy minimization in the moving window is expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy the inefficiency of the first approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, is capable of finding attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise in detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
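The moving-window idea above can be sketched in a few lines. In the hedged toy below, a crude base-pairing count stands in for real folding prediction by energy minimization (e.g. RNAfold), and the window size, step, and threshold are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of a moving-window scan for hairpin-like candidate regions.
# A toy complementarity score replaces true energy minimization.

PAIRS = {("G", "C"), ("C", "G"), ("A", "U"), ("U", "A"), ("G", "U"), ("U", "G")}

def hairpin_score(window):
    """Count complementary pairs when the window is folded back on itself --
    a crude stand-in for a folding-energy score."""
    half = len(window) // 2
    left, right = window[:half], window[half:][::-1]
    return sum((a, b) in PAIRS for a, b in zip(left, right))

def scan(seq, win=20, step=5, threshold=7):
    """Slide a window along the sequence and report (start, score) for
    regions whose toy score reaches the threshold."""
    hits = []
    for start in range(0, len(seq) - win + 1, step):
        score = hairpin_score(seq[start:start + win])
        if score >= threshold:
            hits.append((start, score))
    return hits

# A sequence with a perfect planted hairpin between low-complexity flanks
seq = "AUGCAUGCAU" + "GGGGGGGGGG" + "CCCCCCCCCC" + "AUGCAUGCAU"
candidates = scan(seq)
```

Only the window covering the planted stem scores above the threshold, illustrating why the full method is expensive: each window in a genome-wide scan would need a real folding prediction rather than this constant-time surrogate.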
Procedia PDF Downloads 234
15514 The Effectiveness of Cathodic Protection on Microbiologically Influenced Corrosion Control
Authors: S. Taghavi Kalajahi, A. Koerdt, T. Lund Skovhus
Abstract:
Cathodic protection (CP) is an electrochemical method to control and manage corrosion in different industries and environments. CP is widely used, especially in buried and submerged environments, both of which are susceptible to microbiologically influenced corrosion (MIC). Most standards recommend performing CP at -800 mV; however, if the MIC threat is high or sulfate-reducing bacteria (SRB) are present, the recommendation is to use more negative potentials for adequate protection of the metal. Due to the lack of knowledge and research on the effectiveness of CP against MIC, to the authors’ best knowledge, there is no guidance on how to assess the MIC threat or on how much more negative the potential should be to provide adequate protection without overprotection (which carries a hydrogen embrittlement risk). Recently, the development and falling price of molecular microbial methods (MMMs) have opened the door for more effective investigations of corrosion in the presence of microorganisms, alongside other electrochemical methods and surface analysis. In this work, using MMMs, the gene expression of SRB biofilms under different CP potentials will be investigated. Specific genes, such as those involved in pH buffering, metal oxidation, etc., will be compared at different potentials, enabling determination of the precise potential that effectively protects the metal from SRB. This work is the initial step towards standardizing the recommended potential under MIC conditions, resulting in better protection of infrastructure.
Keywords: cathodic protection, microbiologically influenced corrosion, molecular microbial methods, sulfate reducing bacteria
Procedia PDF Downloads 92
15513 Review of Transportation Modeling Software
Authors: Hassan M. Al-Ahmadi, Hamad Bader Almobayedh
Abstract:
Planning for urban transportation is essential for developing effective and sustainable transportation networks that meet the needs of various communities. Advanced modeling software is required for effective transportation planning, management, and optimization. This paper compares PTV VISUM, Aimsun, TransCAD, and Emme, four industry-leading software tools for transportation planning and modeling. Each tool has strengths and limitations, and the project's needs, financial constraints, and level of technical expertise influence the choice of software. By utilizing these software tools, transportation experts can design and improve urban transportation systems that are effective, sustainable, and meet the changing needs of their communities.
Keywords: PTV VISUM, Aimsun, TransCAD, transportation modeling software
Procedia PDF Downloads 31
15512 Assessing Significance of Correlation with Binomial Distribution
Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar
Abstract:
Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of the data. The correlation between genes and samples has been regularly used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violation of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between the genes and lead to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) is equal to the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality for the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we would not claim superiority of the method over other existing correlation methods, but our method could be another way of calculating correlation in addition to existing methods. The method uses the binomial distribution, which has not been used for this purpose before, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and pierced by outliers, to see if our method can differentiate between spurious and actual correlation.
While working with the method, it has not escaped our notice that the method could also be generalized to measure the association of more than two variables, which has proven difficult with the existing methods.
Keywords: binomial distribution, correlation, microarray, outliers, transcriptome
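The Ns-versus-Es test described above can be sketched concretely. The binarization rule (above/below the median), the toy expression values, and the two-sided tail definition below are illustrative assumptions; the core idea from the abstract is only that agreements between two Bernoulli-coded genes are compared to their expected count under independence, with significance from the binomial distribution.

```python
import math

# Sketch of a binomial association test between two genes: binarize
# expression, count samples where the genes agree (Ns), compare to the
# expected count under independence (Es), and compute a two-sided
# binomial tail probability.

def binarize(values):
    """1 if a sample is above the gene's median value, else 0 (assumed rule)."""
    med = sorted(values)[len(values) // 2]
    return [1 if v > med else 0 for v in values]

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def association_pvalue(gene_a, gene_b):
    a, b = binarize(gene_a), binarize(gene_b)
    n = len(a)
    ns = sum(x == y for x, y in zip(a, b))   # observed agreements (Ns)
    pa, pb = sum(a) / n, sum(b) / n
    p_agree = pa * pb + (1 - pa) * (1 - pb)  # P(agreement) if independent
    es = n * p_agree                         # expected agreements (Es)
    # Two-sided tail: all outcomes at least as far from Es as Ns
    pval = sum(binom_pmf(k, n, p_agree) for k in range(n + 1)
               if abs(k - es) >= abs(ns - es))
    return ns, es, pval

# A strongly co-expressed pair: gene_b tracks gene_a closely.
gene_a = [1.0, 5.2, 0.8, 6.1, 1.2, 5.9, 0.5, 6.3, 1.1, 5.5]
gene_b = [1.1, 5.0, 0.9, 6.0, 1.0, 6.2, 0.7, 6.1, 1.3, 5.4]
ns, es, pval = association_pvalue(gene_a, gene_b)
```

Because the binarized outcomes agree in all 10 samples while only about 5 agreements are expected under independence, the tail probability is small and the pair is flagged as associated; no normality assumption enters anywhere.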
Procedia PDF Downloads 415
15511 Modern Methods of Construction (MMC): The Potentials and Challenges of Using Prefabrication Technology for Building Modern Houses in Afghanistan
Authors: Latif Karimi, Yasuhide Mochida
Abstract:
The purpose of this paper is to study Modern Methods of Construction (MMC), specifically prefabrication technology, and check the applicability, suitability, and benefits of this construction technique over conventional methods for building new houses in Afghanistan. The construction industry and house-building sector are key contributors to Afghanistan’s economy. However, this sector is challenged by a lack of innovation and by the severe impacts it has on the environment due to the huge amount of construction waste from building, demolition, and/or renovation activities. This paper studies prefabrication technology, a popular MMC that is becoming more common, improving in quality, and available in a variety of budgets. Several feasibility studies worldwide have revealed that this method is the way forward in improving construction industry performance, as it has been proven to reduce construction time and construction waste and to improve the environmental performance of construction processes. In addition, this study emphasizes 'sustainability' in house building, since it is a common challenge in housing construction projects on a global scale. This challenge becomes more severe in the case of under-developed countries like Afghanistan, because most houses are being built in the absence of a serious quality-control mechanism and are dismissive of the basic requirements of sustainable houses: well-being, cost-effectiveness, minimization or prevention of waste production during construction and use, and avoidance of severe environmental impacts in view of a life-cycle assessment. Methodology: A literature review and study of the conventional practices of building houses in urban areas of Afghanistan. A survey is also being completed to study the potentials and challenges of using prefabrication technology for building modern houses in cities across the country.
A residential housing project is selected as a case study to determine the drawbacks of current construction methods vs. the prefabrication technique for building a new house. Originality: There is little previous research available about MMC considering its specific impacts on sustainability related to house-building practices. This study will be of interest to a broad range of people, including planners, construction managers, builders, and house owners.
Keywords: modern methods of construction (MMC), prefabrication, prefab houses, sustainable construction, modern houses
Procedia PDF Downloads 243
15510 Challenges of Implementing Zero Trust Security Based on NIST SP 800-207
Authors: Mazhar Hamayun
Abstract:
Organizations need to take a holistic approach to their Zero Trust strategic and tactical security needs. This includes using a framework-agnostic model that will ensure all enterprise resources are being accessed securely, regardless of their location. This can be achieved through the implementation of a security posture, monitoring the posture, and adjusting the posture through the Identify, Detect, Protect, Respond, and Recover methods. The target audience of this document includes those involved in the management and operational functions of risk, information security, and information technology. This audience consists of the chief information security officer, chief information officer, chief technology officer, and those leading digital transformation initiatives where Zero Trust methods can help protect an organization’s data assets.
Keywords: ZTNA, zero trust architecture, microsegmentation, NIST SP 800-207
Procedia PDF Downloads 87
15509 Experimental Studies on the Effect of Premixing Methods in Anaerobic Digestor with Corn Stover
Authors: M. Sagarika, M. Chandra Sekhar
Abstract:
Agricultural residues are produced in large quantities in India and represent an abundant but underutilized source of renewable biomass in agriculture. In India, the amount of crop residues available is estimated to be approximately 686 million tons. Anaerobic digestion is a promising option to utilize the surplus agricultural residues and can produce biogas and digestate. Biogas is mainly methane (CH4), which can be utilized as an energy source in replacement of fossil fuels such as natural gas and oil; digestate, on the other hand, contains high amounts of nutrients and can be employed as fertilizer. Solid-state anaerobic digestion (total solids ≥ 15%) is suitable for agricultural residues, as it reduces problems like stratification and floating that occur in liquid anaerobic digestion (total solids < 15%). The major concern in solid-state anaerobic digestion is the low mass transfer between feedstock and inoculum, which results in low performance. To resolve this low mass-transfer issue, effective mixing of feedstock and inoculum is required. Mechanical mixing using a stirrer during the digestion process can be done, but it is difficult to stir feedstock with a high solids percentage and high viscosity. Complete premixing of feedstock and inoculum is an alternative method, which is usual in lab-scale studies but may not be affordable due to the high energy demand in large-scale digesters. Developing partial premixing methods may reduce this problem. The current study aims to improve the performance of solid-state anaerobic digestion of corn stover at feedstock-to-inoculum ratios of 3 and 5 by applying partial premixing methods, and to compare the complete premixing method with two partial premixing methods: two alternating layers of feedstock and inoculum, and three alternating layers of feedstock and inoculum with higher inoculum ratios in the top layers.
From the experimental studies, it is observed that the partial premixing method with three alternating layers of feedstock and inoculum yielded good methane production.
Keywords: anaerobic digestion, premixing methods, methane yield, corn stover, volatile solids
Procedia PDF Downloads 234
15508 Solving 94-Bit ECDLP with 70 Computers in Parallel
Authors: Shunsuke Miyoshi, Yasuyuki Nogami, Takuya Kusaka, Nariyoshi Yamai
Abstract:
The elliptic curve discrete logarithm problem (ECDLP) is one of the problems on which the security of pairing-based cryptography is based. This paper considers Pollard's rho method to evaluate the security of ECDLP on a Barreto-Naehrig (BN) curve, an efficient pairing-friendly curve. Some techniques are proposed to make the rho method efficient; in particular, the group structure on the BN curve, the distinguished point method, and the Montgomery trick are well-known techniques. This paper applies these techniques and shows their optimization. According to the experimental results, for which a large-scale parallel system with MySQL was applied, a 94-bit ECDLP was solved in about 28 hours by parallelizing 71 computers.
Keywords: Pollard's rho method, BN curve, Montgomery multiplication
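Pollard's rho for the discrete logarithm can be illustrated at toy scale. The sketch below runs on a tiny multiplicative subgroup rather than a BN elliptic curve, and omits the distinguished-point and Montgomery-trick optimizations the paper studies; the group, generator, and secret exponent are invented for illustration.

```python
import random

# Toy Pollard's rho for the discrete log: find k with g^k = h (mod p),
# working in the subgroup of prime order q = 53 inside Z_107^*.
p = 107                  # small prime; p - 1 = 2 * 53
q = 53                   # prime order of the subgroup
g = pow(2, 2, p)         # g = 4 generates the order-53 subgroup
k_secret = 37
h = pow(g, k_secret, p)  # challenge: recover k_secret from h

def step(x, a, b):
    """Pseudo-random walk preserving the invariant x = g^a * h^b (mod p)."""
    s = x % 3
    if s == 0:
        return (x * x) % p, (2 * a) % q, (2 * b) % q
    if s == 1:
        return (x * g) % p, (a + 1) % q, b
    return (x * h) % p, a, (b + 1) % q

def rho_dlog():
    rng = random.Random(1)
    while True:
        a0, b0 = rng.randrange(q), rng.randrange(q)
        x = (pow(g, a0, p) * pow(h, b0, p)) % p
        # Floyd cycle detection: tortoise moves one step, hare moves two
        x1, a1, b1 = x, a0, b0
        x2, a2, b2 = step(x, a0, b0)
        while x1 != x2:
            x1, a1, b1 = step(x1, a1, b1)
            x2, a2, b2 = step(*step(x2, a2, b2))
        if (b1 - b2) % q == 0:
            continue  # degenerate collision; restart from a new point
        # g^a1 * h^b1 = g^a2 * h^b2  =>  k = (a2 - a1) / (b1 - b2)  (mod q)
        return (a2 - a1) * pow(b1 - b2, -1, q) % q

k = rho_dlog()
```

A real attack replaces this group with BN-curve points, stores only distinguished points in a shared database (MySQL in the paper), and batches inversions with the Montgomery trick, but the collision equation solved at the end is the same.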
Procedia PDF Downloads 272
15507 Creativity in Industrial Design as an Instrument for the Achievement of the Proper and Necessary Balance between Intuition and Reason, Design and Science
Authors: Juan Carlos Quiñones
Abstract:
Much time has passed since industrial design was put on a mass-production basis. Industrial design applies methods from different disciplines with a strategic approach, to place humans at the center of the design process and to deliver solutions that are meaningful and desirable for users and for the market. This analysis summarizes some of the discussions that occurred at the 6th International Forum of Design as a Process, June 2016, Valencia. The aims of this conference were to find new linkages between systems and design interactions in order to define the social consequences. Through knowledge management we are able to transform the intangible by using design as a transforming function capable of converting intangible knowledge into tangible solutions (i.e., products and services demanded by society). Industrial designers use knowledge consciously as a starting point for the ideation of the product. The handling of the intangible becomes more and more relevant over time as different methods emerge for knowledge extraction and subsequent organization. The different methodologies applied to the industrial design discipline, and the evolution of the discipline's methods, underpin the cultural and scientific background knowledge as a starting point of thought in response to needs; all of this comes through the instrument of creativity for the achievement of the proper and necessary balance between intuition and reason, design and science.
Keywords: creative process, creativity, industrial design, intangible
Procedia PDF Downloads 287
15506 Active Cyber Defense within the Concept of NATO’s Protection of Critical Infrastructures
Authors: Serkan Yağlı, Selçuk Dal
Abstract:
Cyber-attacks pose a serious threat to all states. Therefore, states constantly seek various methods to counter those threats. In addition, recent changes in the nature of cyber-attacks and their more complicated methods have created a new concept: active cyber defence (ACD). This article tries to answer, firstly, why ACD is important to NATO, and to find out NATO's viewpoint towards ACD. Secondly, infrastructure protection is essential to cyber defence, and critical infrastructure protection by ACD means is even more important. It is assumed that by implementing active cyber defence, NATO may not only be able to repel attacks but also act as a deterrent. Hence, the use of ACD has a direct positive effect on the future of all international organizations, including NATO.
Keywords: active cyber defence, advanced persistent threat, critical infrastructure, NATO
Procedia PDF Downloads 244
15505 Simulation of Obstacle Avoidance for Multiple Autonomous Vehicles in a Dynamic Environment Using Q-Learning
Authors: Andreas D. Jansson
Abstract:
The availability of inexpensive, yet competent hardware allows for increased levels of automation and self-optimization in the context of Industry 4.0. However, such agents require high-quality information about their surroundings along with a robust strategy for collision avoidance, as they may otherwise cause expensive damage to equipment or other agents. Manually defining a strategy to cover all possibilities is both time-consuming and counter-productive given the capabilities of modern hardware. This paper explores the idea of a model-free self-optimizing obstacle avoidance strategy for multiple autonomous agents in a simulated dynamic environment using the Q-learning algorithm.
Keywords: autonomous vehicles, industry 4.0, multi-agent system, obstacle avoidance, Q-learning, simulation
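The model-free idea above can be sketched with tabular Q-learning. The toy below is a single agent on a small static grid with one obstacle cell; the grid, rewards, and hyperparameters are illustrative assumptions, and the paper's setting (multiple agents, dynamic environment) is richer than this.

```python
import random

# Minimal tabular Q-learning: an agent learns, without a hand-coded strategy,
# to reach a goal cell while avoiding an obstacle cell on a 4x4 grid.
SIZE, GOAL, OBSTACLE = 4, (3, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def move(state, action):
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    if nxt == OBSTACLE:
        return nxt, -10.0, True   # collision: penalty, episode ends
    if nxt == GOAL:
        return nxt, 10.0, True    # goal reached
    return nxt, -0.1, False       # step cost encourages short paths

random.seed(0)
Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(3000):
    s, done = (0, 0), False
    while not done:
        if random.random() < eps:
            a = random.randrange(4)                       # explore
        else:
            a = max(range(4), key=lambda i: Q[s][i])      # exploit
        s2, reward, done = move(s, ACTIONS[a])
        target = reward + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])             # TD update
        s = s2

# Greedy rollout with the learned policy
s, path = (0, 0), [(0, 0)]
for _ in range(12):
    s, _, done = move(s, ACTIONS[max(range(4), key=lambda i: Q[s][i])])
    path.append(s)
    if done:
        break
```

After training, the greedy policy routes around the obstacle to the goal; extending this to the paper's multi-agent dynamic case would make each agent's state include the other agents' positions.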
Procedia PDF Downloads 138
15504 Sterilization of Potato Explants for in vitro Propagation
Authors: D. R. Masvodza, G. Coetzer, E. van der Watt
Abstract:
Microorganisms usually grow prolifically and may cause major problems in in-vitro cultures. For in vitro propagation to be successful, explants need to be sterile. In order to determine the best sterilization method for potato explants cv. Amethyst, five sterilization methods were applied separately to 24 shoots. The first sterilization method was the use of 20% sodium hypochlorite with 1 ml Tween 20 for 15 minutes. The second, third, and fourth sterilization methods were the immersion of explants in 70% ethanol in a beaker for 30 seconds, 1 minute, or 2 minutes respectively, followed by 1% sodium hypochlorite with 1 ml Tween 20 for 5 minutes. For the control treatment, no chemicals were used. Finally, all the explants were rinsed three times with autoclaved distilled water and trimmed to 1-2 cm. Explants were then cultured on MS medium with 0.01 mg L-1 NAA and 0.1 mg L-1 GA3, supplemented with 2 mg L-1 D-calcium pantothenate. The trial was laid out as a completely randomized design, and each treatment combination was replicated 24 times. At 7, 14, and 21 days after culture, data on explant color, survival, and presence or absence of contamination were recorded. The best results were obtained when 20% sodium hypochlorite was used with 1 ml Tween 20 for 15 minutes (sterilization method 1). Method 2 was comparable to method 1 when explants were cultured in glass vessels. Explants in glass vessels were significantly less contaminated than explants in polypropylene vessels. Therefore, ideal sterilization methods should at times be coupled with ideal culture conditions, such as good-quality culture vessels, rather than the addition of more stringent sterilants.
Keywords: culture containers, explants, sodium hypochlorite, sterilization
Procedia PDF Downloads 332
15503 Comparisons between Student Learning Achievements and Their Problem Solving Skills on Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study is to compare instructional design models, the Think-Pair-Share and conventional learning (5E Inquiry Model) processes, for enhancing students' learning achievements and their problem-solving skills on the stoichiometry issue. The sample consisted of 80 students in 2 classes at the 11th grade level in Chaturaphak Phiman Ratchadaphisek School, selected with the cluster random sampling technique to compare students' different learning outcomes in chemistry classes. The instructional methods were designed with a 40-student experimental group taught by the Think-Pair-Share process and a 40-student control group taught by the conventional learning (5E Inquiry Model) method. These different learning groups were assessed using 5 instruments: the 5-lesson instructional plans of the Think-Pair-Share and STEM Education methods; students' learning achievements and problem-solving skills, assessed with pretest and posttest techniques; and students' outcomes under the Think-Pair-Share (TPSM) and STEM Education methods, which were compared. Statistically significant differences between posttest and pretest scores, using the paired t-test and F-test, were found for the whole group of students in the chemistry classes. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem-solving skills were also found. The results reveal that students in the different groups perceive their learning achievements and problem-solving skills differently, guiding practical improvements in chemistry classrooms and assisting teachers in implementing effective approaches for improving instructional methods.
Students' mean learning achievement scores in the control group with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental student group with the STEM education method. The E1/E2 process revealed evidence of 82.56/80.44 and 83.02/81.65, which, based on the criteria, are higher than the 80/80 standard level with the IOC. The predictive efficiency (R2) values indicate that 61% and 67% of the variances in posttest learning achievements in the chemistry classes, and 63% and 67% of the variances in students' problem-solving skills relative to their learning achievements on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM instructional methods.
Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
Procedia PDF Downloads 249
15502 Modeling and Validation of Microspheres Generation in the Modified T-Junction Device
Authors: Lei Lei, Hongbo Zhang, Donald J. Bergstrom, Bing Zhang, K. Y. Song, W. J. Zhang
Abstract:
This paper presents a model of a modified T-junction device for microsphere generation. The numerical model is developed using a commercial software package, COMSOL Multiphysics. In order to test the accuracy of the numerical model, multiple variables, such as the flow rate of the cross-flow, the fluid properties, and the structure and geometry of the microdevice, are applied. The results from the model are compared with the experimental results in terms of the diameter of the microspheres generated. The comparison shows good agreement. Therefore, the model is useful for further optimization of the device and for feedback control of microsphere generation, if any.
Keywords: CFD modeling, validation, microsphere generation, modified T-junction
Procedia PDF Downloads 70715501 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network
Authors: Gulfam Haider, Sana Danish
Abstract:
Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. 
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent
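For illustration, the update rules of two of the optimizers compared above, SGD and Adam, can be sketched on a toy one-dimensional quadratic loss. This is a minimal sketch of the update equations only, not the paper's CIFAR-10 CNN experiments; the learning rate and step counts are arbitrary choices for the toy problem.

```python
import math

def grad(w):
    # Gradient of the toy loss f(w) = (w - 3)^2, minimized at w = 3
    return 2.0 * (w - 3.0)

def sgd(w, lr=0.1, steps=100):
    """Plain stochastic gradient descent step (here: full gradient)."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def adam(w, lr=0.1, steps=100, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: momentum-like first moment plus per-parameter scaling
    by the second moment, both bias-corrected."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

print(round(sgd(0.0), 4), round(adam(0.0), 4))
```

Both trajectories approach the minimizer w = 3; on real CNN losses the relative convergence speed and robustness of the two rules is exactly what the study above measures empirically.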
Procedia PDF Downloads 12515500 Electric Vehicles Charging Stations: Strategies and Algorithms Integrated in a Power-Sharing Model
Authors: Riccardo Loggia, Francesca Pizzimenti, Francesco Lelli, Luigi Martirano
Abstract:
Recent air emission regulations point toward the complete electrification of road vehicles. An increasing number of users are beginning to prefer full electric or hybrid, plug-in vehicle solutions, incentivized by government subsidies and the lower cost of electricity compared to gasoline or diesel. However, it is necessary to optimize charging stations so that they can simultaneously satisfy as many users as possible. The purpose of this paper is to present optimization algorithms that enable simultaneous charging of multiple electric vehicles while ensuring maximum performance in relation to the type of charging station.Keywords: electric vehicles, charging stations, sharing model, fast charging, car park, power profiles
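The abstract above does not disclose its specific algorithms, but the core power-sharing idea can be sketched with a hypothetical proportional-allocation rule: when the sum of the vehicles' requested charging powers exceeds the station limit, all requests are scaled down by a common factor. The function name, request values, and station cap below are illustrative assumptions, not taken from the paper.

```python
def share_power(requests, p_max):
    """Scale each vehicle's requested charging power (kW) so the total
    never exceeds the station limit p_max (proportional sharing)."""
    total = sum(requests)
    if total <= p_max:
        return list(requests)          # enough capacity: grant all requests
    scale = p_max / total              # otherwise shrink all proportionally
    return [r * scale for r in requests]

# Four vehicles requesting 22, 11, 7.4 and 3.7 kW at a 30 kW station
alloc = share_power([22, 11, 7.4, 3.7], 30)
print([round(a, 2) for a in alloc], round(sum(alloc), 2))
```

Real charging-station controllers layer priorities, fast-charging tariffs, and time-varying profiles on top of such a base rule, which is the kind of optimization the paper addresses.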
Procedia PDF Downloads 15415499 An Approximation Algorithm for the Non Orthogonal Cutting Problem
Abstract:
We study the problem of cutting a rectangular material entity into smaller sub-entities of trapezoidal form with minimum waste of material; this problem will be denoted TCP (Trapezoidal Cutting Problem). The TCP has many applications in the manufacturing processes of various industries: pipeline design (petrochemistry), airfoil design (aeronautics), and cutting the components of textile products. We introduce an orthogonal construction that provides the optimal horizontal and vertical homogeneous strips, and develop a general heuristic search based upon it. By solving two one-dimensional knapsack problems, we combine the horizontal and vertical homogeneous strips into a non-orthogonal cutting pattern.Keywords: combinatorial optimization, cutting problem, heuristic
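The one-dimensional knapsack subproblem mentioned above is standard and can be sketched with the classical dynamic program: choose a subset of strips whose total width fits the stock dimension while maximizing the utilized value. The strip widths and values below are hypothetical, for illustration only.

```python
def knapsack(widths, values, capacity):
    """0/1 knapsack by dynamic programming: pick a subset of strips whose
    widths fit in `capacity` with maximal total value."""
    best = [0] * (capacity + 1)
    for w, v in zip(widths, values):
        for c in range(capacity, w - 1, -1):   # backwards: each strip used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Hypothetical horizontal strips: widths (mm) and utilized-area values
print(knapsack([30, 45, 60, 20], [9, 14, 20, 5], 100))
```

In the heuristic described above, two such solves (one for horizontal strips, one for vertical) are combined to form the final non-orthogonal pattern.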
Procedia PDF Downloads 54115498 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been used most often for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to compute the Bi-CG coefficients as accurately as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the standard Bi-CGSTAB converges faster than the parallel variants.
Moreover, we propose the stabilization strategy for the parallel variants.Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
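For reference, a textbook sequential Bi-CGSTAB iteration is sketched below on a small dense system; the inner products marked in comments are the global reductions that the parallel variants discussed above restructure or overlap with computation. This is the standard algorithm, not the paper's stabilized or parallel formulations.

```python
def bicgstab(A, b, tol=1e-10, max_iter=100):
    """Textbook Bi-CGSTAB for Ax = b with dense A (lists of lists).
    Each iteration needs several inner products -- the global reductions
    that parallel variants try to merge or hide."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    r_hat = r[:]                       # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)        # inner product 1 (global reduction)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho / dot(r_hat, v)    # inner product 2
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)  # inner products 3 and 4
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            break
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = bicgstab(A, b)
print([round(xi, 6) for xi in x])
```

On a distributed-memory machine each `dot` call would translate into an all-reduce, which is precisely why merging or postponing these reductions, at some cost in numerical stability, is attractive.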
Procedia PDF Downloads 16415497 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions & statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems using multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
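The abstract names the PoCE KPI but does not define it; one plausible reading, sketched below purely as an illustration, is the percentage of instances for which the explainer reproduces the model's true feature-importance ranking. The function name, the notion of "correct" used, and the toy rankings are all assumptions, not the paper's definition.

```python
def poce(true_rankings, explained_rankings):
    """Hypothetical 'Percentage of Correct Explanations': share of instances
    for which the explainer's feature ranking matches the true ranking."""
    correct = sum(t == e for t, e in zip(true_rankings, explained_rankings))
    return 100.0 * correct / len(true_rankings)

# Toy per-instance feature orderings: from the model vs. from SHAP/LIME
truth = [("x1", "x2", "x3"), ("x2", "x1", "x3"),
         ("x1", "x3", "x2"), ("x3", "x1", "x2")]
explained = [("x1", "x2", "x3"), ("x1", "x2", "x3"),
             ("x1", "x3", "x2"), ("x3", "x1", "x2")]
print(poce(truth, explained))
```

A traffic-light assessment, as proposed above, would then bin such a percentage into green/amber/red risk bands per feature.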
Procedia PDF Downloads 9415496 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems allows to model MEMS devices using causal transfer functions easily, and interface them with electronic subsystems for fast system-level explorations of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge-actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work in order to develop a MEMS library containing parameterized fundamental building blocks (area and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small and large signal analyses, including the numerical computation of pull-in voltages (stability loss). Simscape behavioral modeling language was used for the implementation of reduced-order macro models, that present the advantage of a seamless interface with Simulink blocks, for creating hybrid information/energy flow system models. 
Test bench simulations of the library models compare favorably with both analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
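The pull-in voltage mentioned above has a well-known closed form for the simplest case of a parallel-plate electrostatic actuator with a linear spring; a quick numerical check of a library model can start from it. The specific parameter values below are illustrative, not taken from the paper.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Classical pull-in voltage of a parallel-plate electrostatic actuator
    with linear spring stiffness k (N/m), initial gap (m) and plate area (m^2):
    V_pi = sqrt(8*k*gap^3 / (27*eps0*area)).
    Instability (pull-in) occurs after the plate travels one third of the gap."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# Illustrative values: k = 1 N/m, 2 um gap, 200 um x 200 um electrode
print(round(pull_in_voltage(1.0, 2e-6, 200e-6 * 200e-6), 3))
```

Nonlinear springs, fringing fields, and stoppers, as modeled in the Simscape library above, shift this value, which is why the library computes pull-in numerically rather than from the closed form.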
Procedia PDF Downloads 13615495 Stress Corrosion Cracking, Parameters Affecting It, Problems Caused by It and Suggested Methods for Treatment: State of the Art
Authors: Adnan Zaid
Abstract:
Stress corrosion cracking (SCC) may be defined as degradation of the mechanical properties of a material under the combined action of a tensile stress and a corrosive environment to which the material is susceptible. It is a harmful phenomenon that can cause catastrophic fracture without prior warning. In this paper, the SCC process, the parameters affecting it, and the different damages caused by it are presented and discussed. The use of shot peening as a means of enhancing the resistance of materials to SCC is discussed. Finally, a method for improving a material's resistance to SCC by refining its grain structure with grain-refining elements prior to use is suggested.Keywords: stress corrosion cracking, parameters, damages, treatment methods
Procedia PDF Downloads 33015494 Flow Behavior and Performances of Centrifugal Compressor Stage Vaneless Diffusers
Authors: Y.Galerkin, O. Solovieva
Abstract:
Flow parameters are calculated in vaneless diffusers with relative widths of 0.014–0.10, constant along the radius. Inlet flow angles and similarity criteria were varied. Information about the flow structure is presented: the configuration of meridian streamlines, the degree of flow development, and flow separation. Polytropic efficiency and the loss and recovery coefficients are used to compare diffuser effectiveness. An example of narrow diffuser optimization by applying conical walls is presented. Three tapered variants of a wide diffuser are also compared. The work was carried out in the R&D laboratory “Gas dynamics of turbo machines” of the TU SPb.Keywords: vaneless diffuser, relative width, flow angle, flow separation, loss coefficient, similarity criteria
Procedia PDF Downloads 49015493 Studies on the Proximate Composition and Functional Properties of Extracted Cocoyam Starch Flour
Authors: Adebola Ajayi, Francis B. Aiyeleye, Olakunke M. Makanjuola, Olalekan J. Adebowale
Abstract:
Cocoyam, a generic term for both Xanthosoma and Colocasia, is a traditional staple root crop in many developing countries in Africa, Asia, and the Pacific. It is mostly cultivated as a food crop and is very rich in vitamin B6, magnesium, and dietary fiber. Cocoyam starch is easily digested and often used for baby food. Drying is one of the oldest methods of food preservation; it removes enough moisture from the food that bacteria, yeasts, and molds cannot grow. The effect of drying methods on the proximate composition and functional properties of extracted cocoyam starch flour was studied. Freshly harvested cocoyam cultivars at the mature stage were washed with potable water, peeled, washed, and grated, and the starch in the grated cocoyam was extracted and dried by sun drying, oven drying, and cabinet drying. The extracted starch was milled into flour using an Apex mill, packed and sealed in low-density polyethylene (LDPE) film of 75 micron thickness with a Nylon sealing machine QN5-3200HI, and kept for three months at ambient temperature before analysis. The results showed that the moisture content, ash, crude fiber, fat, protein, and carbohydrate ranged from 6.28% to 12.8%, 2.32% to 3.2%, 0.89% to 2.24%, 1.89% to 2.91%, 7.30% to 10.2%, and 69% to 83%, respectively. The functional properties of the cocoyam starch flour ranged from 2.65 ml/g to 4.84 ml/g water absorption capacity, 1.95 ml/g to 3.12 ml/g oil absorption capacity, 0.66 ml/g to 7.82 ml/g bulk density, and 3.82 ml/g to 5.30 ml/g swelling capacity. No significant difference (P ≥ 0.05) was found across the various drying methods. The drying methods extend the shelf life of the extracted cocoyam starch flour.Keywords: cocoyam, extraction, oven dryer, cabinet dryer
Procedia PDF Downloads 29515492 Optimization and Evaluation of 177Lu-DOTATOC as a Potential Agent for Peptide Receptor Radionuclide Therapy
Authors: H. Yousefnia, MS. Mousavi-Daramoroudi, S. Zolghadri, F. Abbasi-Davani
Abstract:
The high expression of somatostatin receptors on a wide range of human tumours makes them potential targets for peptide receptor radionuclide therapy. A series of octreotide analogues have been synthesized, among which [DOTA-DPhe1, Tyr3]octreotide (DOTATOC) has shown advantageous properties in tumour models. In this study, 177Lu-DOTATOC was prepared with a radiochemical purity higher than 99% in 30 min under the optimized conditions. The biological behavior of the complex was studied after intravenous injection into Syrian rats. Markedly different uptake was observed compared to the 177LuCl3 solution, especially in somatostatin receptor-positive tissues such as the pancreas and adrenal glands.Keywords: biodistribution, 177Lu, octreotide, Syrian rats
Procedia PDF Downloads 44815491 Split Monotone Inclusion and Fixed Point Problems in Real Hilbert Spaces
Authors: Francis O. Nwawuru
Abstract:
The convergence analysis of split monotone inclusion problems and fixed point problems of certain nonlinear mappings is investigated in the setting of real Hilbert spaces. An inertial extrapolation term in the spirit of Polyak is incorporated to speed up the rate of convergence. Under standard assumptions, strong convergence of the proposed algorithm is established without computing the resolvent operator or invoking the Yosida approximation method. The stepsize involved in the algorithm does not depend on the spectral radius of the linear operator. Furthermore, applications of the proposed algorithm to some related optimization problems are considered. Our result complements and extends numerous results in the literature.Keywords: fixed point, Hilbert space, monotone mapping, resolvent operators
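The inertial extrapolation idea in the spirit of Polyak can be illustrated in its simplest form, the heavy-ball gradient step x_{k+1} = x_k + θ(x_k − x_{k−1}) − λ∇f(x_k); the paper's algorithm applies the same extrapolation term to a splitting iteration for monotone inclusions in Hilbert spaces, which this one-dimensional sketch does not reproduce.

```python
def heavy_ball(grad, x0, lr=0.1, momentum=0.5, steps=200):
    """Gradient step with Polyak-style inertial extrapolation:
    x_{k+1} = x_k + momentum*(x_k - x_{k-1}) - lr*grad(x_k)."""
    x_prev, x = x0, x0
    for _ in range(steps):
        x_next = x + momentum * (x - x_prev) - lr * grad(x)
        x_prev, x = x, x_next
    return x

# Minimize f(x) = (x - 5)^2; the inertial term accelerates convergence
print(round(heavy_ball(lambda x: 2.0 * (x - 5.0), 0.0), 6))
```

The difference x_k − x_{k−1} costs almost nothing to form, which is why inertial terms are a popular acceleration device for splitting and fixed-point algorithms.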
Procedia PDF Downloads 5215490 Tuned Mass Damper Vibration Control of Pedestrian Bridge
Authors: Qinglin Shu
Abstract:
Based on an analysis of the structural vibration comfort of a domestic bridge, this paper studies the vibration reduction control principle of the tuned mass damper (TMD), the derivation of its optimal design parameters, and how to simulate a TMD in the finite element software ANSYS. The research shows that, where the comfort level of a bridge exceeds the limit under individual working conditions, vibration reduction control using a TMD can effectively reduce the vibration of the structure. Calculations show that when the mass ratio of the TMD is 0.01, the vibration reduction rate under different working conditions is more than 90%, and the dynamic displacement of the TMD mass block remains within 0.01 m, indicating that the TMD design is reasonable and safe.Keywords: pedestrian bridges, human-induced vibration, comfort, tuned mass dampers
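The classical starting point for the design-parameter optimization mentioned above is Den Hartog's closed-form tuning for an undamped primary structure; the paper's derivation may use a different optimality criterion, so the formulas below are the textbook baseline, evaluated at the study's mass ratio of 0.01.

```python
import math

def tmd_parameters(mass_ratio):
    """Den Hartog's classical optimal TMD tuning for an undamped primary
    structure: frequency ratio f = 1/(1+mu), damping ratio
    zeta = sqrt(3*mu / (8*(1+mu)^3))."""
    mu = mass_ratio
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Mass ratio 0.01, as in the bridge study above
f, z = tmd_parameters(0.01)
print(round(f, 4), round(z, 4))
```

These two ratios, together with the chosen mass, fix the TMD's spring stiffness and dashpot constant, which can then be entered directly into an ANSYS model for verification.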
Procedia PDF Downloads 114