Search results for: optimization process
3649 Validation and Application of a New Optimized RP-HPLC-Fluorescent Detection Method for Norfloxacin
Authors: Mahmood Ahmad, Ghulam Murtaza, Sonia Khiljee, Muhammad Asadullah Madni
Abstract:
A new reverse phase-high performance liquid chromatography (RP-HPLC) method with a fluorescence detector (FLD) was developed and optimized for norfloxacin determination in human plasma. The mobile phase composition, extraction method, and excitation and emission wavelengths were varied during optimization. The HPLC system comprised a reverse phase C18 (5 μm, 4.6 mm × 150 mm) column with the FLD operated at 330 nm excitation and 440 nm emission. The optimized mobile phase consisted of 14% acetonitrile in buffer solution. The aqueous phase, prepared by dissolving 2 g of citric acid, 2 g of sodium acetate and 1 mL of triethylamine in 1 L of Milli-Q water, was run at a flow rate of 1.2 mL/min. The standard curve was linear over the range tested (0.156–20 μg/mL), with a coefficient of determination of 0.9978. Aceclofenac sodium was used as the internal standard. A detection limit of 0.078 μg/mL was achieved. The run time was set at 10 minutes, although the retention time of norfloxacin was 0.99 min, which shows the rapidity of this method of analysis. The present assay showed good accuracy, precision and sensitivity for norfloxacin determination in human plasma with a new internal standard and can be applied to the pharmacokinetic evaluation of norfloxacin tablets after oral administration in humans.
Keywords: Norfloxacin, Aceclofenac sodium, Method optimization, RP-HPLC method, Fluorescent detection, Calibration curve.
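A linear calibration of this kind is straightforward to reproduce numerically. The sketch below (Python, assuming NumPy is available) fits a line to hypothetical peak-area ratios over the reported 0.156–20 μg/mL range and back-calculates an unknown concentration; the peak-area values are illustrative and not taken from the study.

```python
import numpy as np

# Hypothetical calibration data: spiked plasma concentrations (ug/mL)
# and peak-area ratios of norfloxacin to the internal standard.
conc = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])
ratio = np.array([0.021, 0.043, 0.085, 0.17, 0.34, 0.69, 1.37, 2.76])  # illustrative

# Least-squares fit of ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, ratio, 1)

# Coefficient of determination (R^2) of the fitted line
pred = slope * conc + intercept
r2 = 1 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)

# Back-calculate an unknown sample from its measured peak-area ratio
unknown_ratio = 0.55
unknown_conc = (unknown_ratio - intercept) / slope

print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r2:.4f}")
print(f"estimated concentration: {unknown_conc:.3f} ug/mL")
```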
3648 Optimization of Double Wishbone Suspension System with Variable Camber Angle by Hydraulic Mechanism
Authors: Mohammad Iman Mokhlespour Esfahani, Masoud Mosayebi, Mohammad Pourshams, Ahmad Keshavarzi
Abstract:
The accuracy of recent multi-body dynamic vehicle simulation has progressed significantly, providing acceptable results not only for passive vehicles but also for active vehicles normally equipped with advanced electronic components. Recently, one of the subjects under consideration is increasing vehicle safety at the design stage. Therefore, many efforts have been made to increase vehicle stability, especially in turns. One of the most important of these is adjusting the camber angle in the car suspension system. Optimum control of the camber angle, in addition to improving vehicle stability, is effective for wheel adhesion on the road, reducing rubber abrasion, and improving acceleration and braking. Since an increase or decrease in the camber angle affects vehicle stability, this paper introduces a car suspension mechanism that can adjust the camber angle and is both practical and inexpensive. To this end, a passive double wishbone suspension system with variable camber angle is introduced, and the variable camber mechanism is then designed and analyzed. To study the performance of the designed system, the mechanism is modeled in Visual Nastran software and a kinematic analysis is presented.
Keywords: Suspension modeling, double wishbone, variable camber, hydraulic mechanism.
3647 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition (HAR) modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there are considerable gaps and challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing, human detection and outer shape detection techniques. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification, we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view Football datasets. Our HAR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% on the AAMAZ and KTH Multi-view Football datasets, respectively.
Keywords: Computer vision, human motion analysis, random forest, machine learning.
3646 Solving a New Mixed-Model Assembly Line Sequencing Problem in an MTO Environment
Authors: N. Manavizadeh, M. Hosseini, M. Rabbani
Abstract:
In recent decades, to meet the varied demands of clients, many manufacturers have tended to use the mixed-model assembly line (MMAL) in their production lines, since this policy makes it possible to assemble various models of equivalent goods on the same line under the MTO approach. In this article, we determine the sequence of the MMAL, applying the kitting approach and planning rest time for general workers to reduce waste, increase worker effectiveness, and apply elements of the lean production approach. This multi-objective sequencing problem is solved for small sizes with GAMS 22.2 and the PSO meta-heuristic on 10 test problems; comparing their results, we conclude that they are very similar, and we then determine the important factors in computing the cost, the improvement of which reduces cost. Since this problem is NP-hard for large sizes, we use the particle swarm optimization (PSO) meta-heuristic to solve it. For large sizes, we define some test problems to survey its performance and determine the important factors in calculating the cost, so that by changing or improving them, production at minimum cost becomes possible.
Keywords: Mixed-model assembly line, particle swarm optimization, multi-objective sequencing problem, MTO system, kit-to-assembly, rest time.
3645 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response as an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological micro-details of the rGO and GO structures were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a developed graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost production.
Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.
3644 In Cognitive Radio the Analysis of Bit-Error-Rate (BER) by Using PSO Algorithm
Authors: Shrikrishan Yadav, Akhilesh Saini, Krishna Chandra Roy
Abstract:
The electromagnetic spectrum is a natural resource, and hence well-organized usage of this limited natural resource is a necessity for better communication. The present static frequency allocation schemes cannot accommodate the demands of the rapidly increasing number of higher data rate services. Therefore, dynamic usage of the spectrum must be distinguished from static usage to increase the availability of the frequency spectrum. Cognitive radio is not a single piece of apparatus but a technology that can incorporate components spread across a network. It offers great promise for improving system efficiency, spectrum utilization, more effective applications, reduction in interference and reduced complexity of usage for users. A cognitive radio is aware of its environment, internal state, and location, and autonomously adjusts its operations to achieve designed objectives. It first senses its spectral environment over a wide frequency band, and then adapts the parameters to maximize spectrum efficiency with high performance. This paper focuses on the analysis of the Bit-Error-Rate in cognitive radio by using the Particle Swarm Optimization Algorithm. It is theoretically as well as practically analyzed and interpreted in terms of advantages and drawbacks, and of how the BER affects the efficiency and performance of the communication system.
Keywords: BER, Cognitive Radio, Environmental Parameters, PSO, Radio spectrum, Transmission Parameters.
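The abstract does not detail the optimization setup, so the following is only a rough sketch of the idea: a minimal particle swarm that selects a transmit SNR minimizing a cost built from the textbook BPSK bit-error-rate in AWGN plus a power penalty. The BER expression, cost weights and bounds are illustrative assumptions, not the authors' model.

```python
import math
import random

def ber_bpsk(snr_db):
    """Theoretical BPSK bit-error-rate in AWGN for a given SNR in dB."""
    snr = 10 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(snr))

def cost(snr_db):
    """Trade-off: penalize BER above 1e-5 and penalize high transmit SNR."""
    return 1e4 * max(0.0, ber_bpsk(snr_db) - 1e-5) + 0.01 * snr_db

def pso(n_particles=20, iters=100, lo=0.0, hi=20.0, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Standard velocity and position update of PSO
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest, ber_bpsk(gbest)

snr_opt, ber_opt = pso()
print(f"selected SNR: {snr_opt:.2f} dB, BER: {ber_opt:.2e}")
```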
3643 Biodiesel Production from Broiler Chicken Waste
Authors: John Abraham, Ramesh Saravana Kumar, Francis Xavier, Deepak Mathew
Abstract:
Broiler slaughter waste has become a major source of pollution throughout the world. Utilization of broiler slaughter waste by the dry rendering process produces Rendered Chicken Oil (RCO), a cheap raw material for biodiesel production, and carcass meal, a feed ingredient for pets and fishes. Conversion of RCO into biodiesel may open new vistas for generating wealth from waste besides controlling a major source of environmental pollution. A two-step process to convert RCO into good-quality biodiesel was developed. Acid-catalysed esterification of FFA followed by base-catalysed transesterification of triglycerides was carried out after meticulously standardizing the methanol molar ratio, catalyst concentration, reaction temperature, and reaction time to obtain a maximum biodiesel yield of 97.62% and a lowest glycerol yield of 6.96%. The RCO biodiesel blend was tested in a CRDI diesel engine. The results revealed that blending commercial diesel with 20% RCO biodiesel (B20) led to less engine wear, a quieter engine and better fuel economy. The better lubricating qualities of RCO B20 prevented overheating of the engine, which prolongs engine life. RCO B20 can reduce the import of crude oil and substantially reduce engine emissions, as proved by significantly lower smoke levels, thus mitigating climate change.
Keywords: Biodiesel, Broiler Waste, Engine Testing, Rendered Chicken Oil.
3642 Improvement of Model for SIMMER Code for SFR Corium Relocation Studies
Authors: A. Bachrata, N. Marie, F. Bertrand, J. B. Droin
Abstract:
The in-depth understanding of severe accident propagation in Generation IV nuclear reactors is important so that appropriate risk management can be undertaken early in their design process. This paper is focused on model improvements in the SIMMER code in order to perform studies of severe accident mitigation for the Sodium Fast Reactor. During the design process of the mitigation devices dedicated to the extraction of molten fuel from the core region, the molten fuel propagation from the core up to the core catcher has to be studied. To this aim, analytical as well as complex thermohydraulic simulations with the SIMMER-III code are performed. The studies presented in this paper focus on the physical phenomena and associated physical models that influence the corium relocation. Firstly, the molten pool heat exchange with surrounding structures is analyzed, since it directly influences the instant of rupture of the dedicated tubes favoring the corium relocation for mitigation purposes. After the corium penetration into the mitigation tubes, the fuel-coolant interactions result in the formation of a debris bed. Analyses of debris bed fluidization as well as sinking into a fluid are also presented in this paper.
Keywords: Corium, mitigation tubes, SIMMER-III, sodium fast reactor (SFR).
3641 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: G. Feletti, D. Tedesco, P. Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPI) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The employed research methodology consists of a first phase of review of the technical-scientific literature concerning the indicators currently in use for the performance measurement of EMS. It emerges that current studies focus on two distinct areas with independent objectives: the ambulance service, a fundamental component of pre-hospital health treatment, and the patient care in the Emergency Department (ED). Conversely, the perspective proposed by this study is to consider an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal covers the end-to-end healthcare service process and, as such, allows considering the interconnection between the two EMS processes, the pre-hospital and hospital ones, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid even to EMS aspects that in the current literature tend to be neglected or underestimated. In particular, the integration of the two processes makes it possible to evaluate the advantage of an ED selection decision made with visibility of EDs' saturation status, therefore considering, besides the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the design of the dashboard was carried out: the high number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' awareness of EDs' accessibility in real time. The association of each KPI with the EMS phase it refers to enabled the design of a well-balanced dashboard, covering both efficiency and effectiveness performance objectives of the entire EMS process, whereas traditional KPIs cover only the initial phases related to the interconnection between ambulance service and patient care. Future developments could be directed to building a hierarchical dashboard, composed of a high-level minimal set of KPIs for measuring the basic performance of the EMS system at an aggregate level, and lower levels of KPIs that bring additional and more detailed information on specific performance dimensions or EMS phases.
Keywords: Emergency Medical Services, Key Performance Indicators, Dashboard, Decision Support.
3640 LFC Design of a Deregulated Power System with TCPS Using PSO
Authors: H. Shayeghi, H.A. Shayanfar, A. Jalili
Abstract:
In the LFC problem, the interconnections among areas are an input of disturbances, and therefore, it is important to suppress the disturbances by the coordination of governor systems. In contrast, tie-line power flow control by a TCPS located between two areas makes it possible to stabilize the system frequency oscillations positively through the interconnection, which is also expected to provide a new ancillary service for future power systems. Thus, a control strategy based on controlling the phase angle of the TCPS is proposed in this paper to provide an active control facility for the system frequency. Also, the optimum adjustment of the PID controller's parameters in a robust way under a bilateral contracted scenario, following large step load demands and disturbances with and without TCPS, is investigated by Particle Swarm Optimization (PSO), which has a strong ability to find optimal results. This newly developed control strategy combines the advantages of PSO and TCPS and has a simple structure that is easy to implement and tune. To demonstrate the effectiveness of the proposed control strategy, a three-area restructured power system is considered as a test system under different operating conditions and system nonlinearities. Analysis reveals that the TCPS is quite capable of suppressing the frequency and tie-line power oscillations effectively, compared with the case without TCPS, for a wide range of plant parameter changes, area load demands and disturbances, even in the presence of system nonlinearities.
Keywords: LFC, TCPS, Deregulated Power System, Power System Control, PSO.
3639 Power System Stability Improvement by Simultaneous Tuning of PSS and SVC Based Damping Controllers Employing Differential Evolution Algorithm
Authors: Sangram Keshori Mohapatra, Sidhartha Panda, Prasant Kumar Satpathy
Abstract:
Power-system stability improvement by simultaneous tuning of power system stabilizer (PSS) and a Static Var Compensator (SVC) based damping controller is thoroughly investigated in this paper. Both local and remote signals with associated time delays are considered in the present study. The design problem of the proposed controller is formulated as an optimization problem, and differential evolution (DE) algorithm is employed to search for the optimal controller parameters. The performances of the proposed controllers are evaluated under different disturbances for both single-machine infinite bus power system and multi-machine power system. The performance of the proposed controllers with variations in the signal transmission delays has also been investigated. The proposed stabilizers are tested on a weakly connected power system subjected to different disturbances. Nonlinear simulation results are presented to show the effectiveness and robustness of the proposed control schemes over a wide range of loading conditions and disturbances. Further, the proposed design approach is found to be robust and improves stability effectively even under small disturbance conditions.
Keywords: Differential Evolution Algorithm, Power System Stability, Power System Stabilizer, Static Var Compensator
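As a generic illustration of DE-based controller tuning (not the authors' PSS/SVC design), the sketch below uses SciPy's differential evolution to tune two damping gains of a toy swing-equation oscillator by minimizing an ITAE criterion; the plant constants, feedback structure and gain bounds are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae(gains, dt=0.01, t_end=10.0):
    """Integral of time-weighted absolute error for a toy swing-equation
    oscillator with proportional-derivative damping feedback."""
    kp, kd = gains
    M, D, K = 10.0, 1.0, 50.0      # inertia, natural damping, synchronizing torque
    delta, omega = 0.1, 0.0        # initial rotor-angle disturbance (rad)
    cost, t = 0.0, 0.0
    while t < t_end:
        u = -(kp * delta + kd * omega)          # supplementary damping signal
        domega = (-D * omega - K * delta + u) / M
        omega += domega * dt
        delta += omega * dt
        t += dt
        cost += t * abs(delta) * dt
    return cost

result = differential_evolution(itae, bounds=[(0.0, 100.0), (0.0, 50.0)],
                                seed=1, maxiter=50)
print("tuned gains (Kp, Kd):", result.x, "ITAE:", result.fun)
```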
3638 Development of Elementary Literacy in the Czech Republic
Authors: Iva Košek Bartošová
Abstract:
Great attention is being paid to the development of first reading, and thus early literacy skills, in the Czech Republic. Yet the inconclusive results of PISA and PIRLS force us to reconsider the teacher's work, his/her roles in the education process, and the methods and forms used in lessons. It is also significantly important to monitor the family environment and the pupils themselves. The aim of this publication is to focus on methods of practicing reading technique and their results in the process of comprehension. The first part of the contribution presents the goals of reading literacy development and the methods used in reading practice in some EU countries, followed by a comparison of research implemented with the help of the modern technology of an eye tracker device in 2015 and research conducted at the Institute of Education and Psychological Counselling of the Czech Republic in 2011/12. These are the results of a diagnostic test of reading in first classes of primary schools taught by the genetic method and the analytic-synthetic method. The results show that in the first stage of practice there are no statistically significant differences between any researched subjects taught by different methods of reading practice (with the use of several diagnostic texts focused on reading technique and its comprehension). Different results are shown at the end of Grade One and during Grade Two of primary school.
Keywords: Elementary literacy, eye tracker device, diagnostic reading tests, reading teaching method.
3637 The Application of Dynamic Network Process to Environment Planning Support Systems
Authors: Wann-Ming Wey
Abstract:
In recent years, in addition to external threats such as energy shortages and climate change, traffic congestion and environmental pollution have become pressing problems for many cities. Considering that private automobile-oriented urban development has produced many negative environmental and social impacts, transit-oriented development (TOD) has been considered a sustainable urban model. TOD encourages public transport combined with friendly walking and cycling environment designs, since non-motorized modes help improve human health, save energy, and reduce carbon emissions. Because environmental changes often affect planners' decision-making, this research applies the dynamic network process (DNP), which includes the time-dependent concept, to promoting friendly walking and cycling environment designs as an advanced planning support system for environmental improvements.
This research aims to discuss what kinds of design strategies can improve a friendly walking and cycling environment under TOD. First, we collate and analyze environment design factors by reviewing the relevant literature and divide fifteen such factors into three aspects: "safety", "convenience", and "amenity". Furthermore, we use a fuzzy Delphi technique (FDT) expert questionnaire to filter out the more important design criteria for the case study. Finally, we use a DNP expert questionnaire to obtain the weight changes at different time points for each design criterion. Based on the changing trends of each criterion weight, we are able to develop appropriate design strategies as a reference for planners allocating resources in a dynamic environment. To illustrate the approach proposed in this research, Taipei City is used as an empirical case study, and the results are analyzed in depth to explain the application of our proposed approach.
Keywords: Environment Planning Support Systems, Walking and Cycling, Transit-oriented Development (TOD), Dynamic Network Process (DNP).
3636 Analytical Formulae for the Approach Velocity Head Coefficient
Authors: Abdulrahman Abdulrahman
Abstract:
Critical depth meters, such as the broad-crested weir, the Venturi flume and the combined control flume, are standard devices for measuring flow in open channels. The discharge relation for these devices cannot be solved directly, but requires an iteration process to account for the approach velocity head. In this paper, an analytical solution was developed to calculate the discharge in a combined critical depth meter, namely a hump combined with a lateral contraction in a rectangular channel with subcritical approach flow, including energy losses. Analytical formulae were also derived for the approach velocity head coefficient for different types of critical depth meters. The solution was derived by solving a standard cubic equation, considering energy loss, on the basis of a trigonometric identity. The advantage of this technique is that it avoids the iteration process adopted in measuring flow with these devices. Numerical examples are chosen to demonstrate the proposed solution.
Keywords: Broad crested weir, combined control meter, control structures, critical flow, discharge measurement, flow control, hydraulic engineering, hydraulic structures, open channel flow.
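The closed-form approach rests on the standard trigonometric solution of a cubic. A generic sketch of that step for the depressed cubic x^3 + p x + q = 0 with three real roots (the case typical of flow-depth equations) is shown below; the specific energy equation of the meter itself is not reproduced here.

```python
import math

def solve_depressed_cubic(p, q):
    """Real roots of x^3 + p*x + q = 0 via the trigonometric identity
    (three-real-root case, 4p^3 + 27q^2 < 0), avoiding iteration."""
    if 4 * p ** 3 + 27 * q ** 2 >= 0:
        raise ValueError("trigonometric form needs three distinct real roots")
    m = 2.0 * math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (2.0 * p) * math.sqrt(-3.0 / p)) / 3.0
    return [m * math.cos(theta - 2.0 * math.pi * k / 3.0) for k in range(3)]

# Example: x^3 - 7x + 6 = 0 has roots 2, 1, -3
print(solve_depressed_cubic(-7.0, 6.0))
```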
3635 Numerical Investigation of the Evaporation and Mixing of UWS in a Diesel Exhaust Pipe
Authors: Tae Hyun Ahn, Gyo Woo Lee, Man Young Kim
Abstract:
Because of their high thermal efficiency and low CO2 emissions, diesel engines are widely used in many industrial fields, although they produce considerable PM and NOx emissions, which negatively affect both human health and the environment. NOx regulations for diesel engines, however, are being strengthened, and it is impossible to meet the emission standards without NOx reduction devices such as SCR (Selective Catalytic Reduction), LNC (Lean NOx Catalyst), and LNT (Lean NOx Trap). Among the NOx reduction devices, the urea-SCR system is known as the most stable and efficient method to solve the problem of NOx emissions. However, this device has some issues associated with the ammonia slip phenomenon, which is caused by a shortage of evaporation and thermolysis time and makes it difficult to achieve a uniform distribution of the injected urea in front of the monolith. Therefore, this study has focused on enhancing the mixing between urea and exhaust gases to improve the efficiency of the SCR catalyst equipped in the catalytic muffler, by changing the inlet gas temperature and spray conditions to improve the spray uniformity of the urea water solution. Finally, it is found that various parameters such as the inlet gas temperature and the injector and injection angles significantly affect the evaporation and mixing of the urea water solution with the exhaust gases, and therefore, optimization of these parameters is required.
Keywords: Evaporation, Injection, Selective Catalytic Reduction (SCR), Thermolysis, UWS (Urea-Water-Solution).
3634 Tensile Properties of 3D Printed PLA under Unidirectional and Bidirectional Raster Angle: A Comparative Study
Authors: Shilpesh R. Rajpurohit, Harshit K. Dave
Abstract:
Fused deposition modeling (FDM) has gained popularity in recent times due to its capability to create prototypes as well as functional end-use products directly from a CAD file. Parts fabricated using the FDM process have mechanical properties comparable with those of injection-molded parts. However, the performance of an FDM part is severely affected by its poor mechanical properties, owing to the layered structure of the printed part. The mechanical properties of the part can be improved by proper selection of process variables. In the present study, a comparison between unidirectional and bidirectional raster angles has been carried out at combinations of different layer heights and raster widths. The unidirectional raster angle was varied at five levels, and the bidirectional raster angle at three levels. Fabrication and tensile testing of specimens were conducted according to the ASTM D638 standard. From the results, it can be observed that the highest tensile strength was obtained at a 0° raster angle, followed by the 45°/45° raster angle, while the lowest tensile strength was obtained at a 90° raster angle. Analysis of the fractured surfaces revealed that failure takes place along the raster deposition direction for unidirectional raster angles, while zigzag failure can be observed for bidirectional raster angles.
Keywords: Additive manufacturing, fused deposition modeling, raster angle, tensile strength.
3633 A Fuzzy Logic Based Model to Predict Surface Roughness of A Machined Surface in Glass Milling Operation Using CBN Grinding Tool
Authors: Ahmed A. D. Sarhan, M. Sayuti, M. Hamdi
Abstract:
Nowadays, the demand for high product quality focuses extensive attention on the quality of the machined surface. CNC milling machines provide a wide variety of parameter set-ups, making the machining process on glass excellent for manufacturing complicated special products compared to other machining processes. The application of a grinding process on the CNC milling machine could be an ideal solution to improve product quality, but adopting the right machining parameters is required. In a glass milling operation, several machining parameters are considered to be significant in affecting surface roughness. These parameters include the lubrication pressure, spindle speed, feed rate and depth of cut. In this research work, a fuzzy logic model is offered to predict the surface roughness of a machined surface in a glass milling operation using a CBN grinding tool. Four membership functions are allocated to each input of the model. The predicted results achieved via the fuzzy logic model are compared to the experimental results. The comparison demonstrated agreement between the fuzzy model and the experimental results, with 93.103% accuracy.
Keywords: CNC machine, glass milling, grinding, surface roughness, cutting force, fuzzy logic model.
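The study's actual rule base and membership functions are not given in the abstract; the sketch below is only a toy Mamdani-style model with triangular memberships for two of the four inputs and weighted-average defuzzification, with all numerical ranges and rule consequents assumed for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_roughness(speed_rpm, feed_mm_min):
    """Toy two-input fuzzy model: low/high fuzzy sets per input, four rules,
    weighted-average defuzzification to a roughness value in micrometres."""
    speed_low  = tri(speed_rpm, 0, 5000, 10000)
    speed_high = tri(speed_rpm, 5000, 10000, 15000)
    feed_low   = tri(feed_mm_min, 0, 100, 200)
    feed_high  = tri(feed_mm_min, 100, 200, 300)

    # Rule consequents are crisp roughness values (Ra, micrometres), assumed
    rules = [
        (min(speed_low,  feed_low),  0.8),
        (min(speed_low,  feed_high), 1.6),
        (min(speed_high, feed_low),  0.4),
        (min(speed_high, feed_high), 1.0),
    ]
    num = sum(w * ra for w, ra in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else float("nan")

print("predicted Ra:", round(predict_roughness(7500, 120), 3), "um")
```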
3632 Heat Treatment of Aluminum Alloy 7449
Authors: Suleiman E. Al-lubani, Mohammad E. Matarneh, Hussien M. Al-Wedyan, Ala M. Rayes
Abstract:
Aluminum alloys have an extensive range of industrial applications due to their consistent mechanical properties and structural integrity. Heat treatment by the precipitation technique affects the magnesium, silicon, manganese and copper crystals dissolved in the aluminum alloy. When thermal energy is supplied, these crystals dislocate and precipitate on the grain boundaries of the aluminum alloy, which increases its hardness. In this project, various times and temperatures were tested to find the combination of these variables that maximizes precipitation of the metals on the aluminum grain boundaries and thus gives the highest hardness. The specimens were then tested for their hardness and tensile strength. It is noticed that when the temperature increases, the precipitation increases and consequently the hardness increases. A threshold temperature of 264 °C should not be reached, due to the occurrence of recrystallization, which causes the crystals to grow. This recrystallization process affects the ductility of the alloy and decreases hardness. In addition, further increasing the temperature will decrease the alloy's mechanical properties. The mechanical properties, namely tensile and hardness properties, are investigated according to standard procedures. In this research, different temperatures and times have been applied to increase hardening. The highest hardness, at 100 °C for 6 hours, was 207.31 HBR, while at the same temperature and time the lowest elongation was 146.5.
Keywords: Aluminum alloy, recrystallization process, heat treatment, hardness properties, precipitation, intergranular breakage.
3631 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data
Authors: Ruchika Malhotra, Megha Khanna
Abstract:
The development of change prediction models can help software practitioners in planning testing and inspection resources at the early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. Data with very few minority outcome categories lead to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change prone whereas a majority of classes may be non-change prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics while dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of the open-source Android data set and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
Keywords: Change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics.
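As a generic illustration of the resampling idea (not the exact sampling/MetaCost configuration evaluated in the study), the sketch below oversamples a rare "change-prone" class with SMOTE before training a classifier and reports ROC-AUC, one of the measures considered robust for imbalanced data; the dataset and parameters are synthetic assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for object-oriented metrics with a rare change-prone class
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
proba = clf.predict_proba(X_te)[:, 1]
print("ROC-AUC on imbalanced test set:", round(roc_auc_score(y_te, proba), 3))
```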
3630 A Tuning Method for Microwave Filter via Complex Neural Network and Improved Space Mapping
Authors: Shengbiao Wu, Weihua Cao, Min Wu, Can Liu
Abstract:
This paper presents an intelligent tuning method for microwave filters based on a complex neural network and improved space mapping. The tuning process consists of two stages: initial tuning and fine tuning. At the beginning of the tuning, the return loss of the filter is transferred to the passband via the phase error. During the fine tuning, the phase shift caused by the transmission line and the higher-order modes is removed by curve fitting. Then, a Cauchy method based on the admittance parameter (Y-parameter) is used to extract the coupling matrix. The influence of the resonant cavity loss is eliminated during the parameter extraction process. By using processed data pairs (the amount of screw variation and the variation of the coupling matrix), a tuning model is established by the complex neural network. Through the improved space mapping algorithm, the mapping relationship between the actual model and the ideal model is established, and the amplitude and direction of the tuning are constantly updated. Finally, a tuning experiment on an eighth-order coaxial cavity filter shows that the proposed method performs well in terms of tuning time and tuning precision.
Keywords: Microwave filter, scattering parameter (S-parameter), coupling matrix, intelligent tuning.
3629 Drought Resilient Water Supply: Establishment of Groundwater Treatment Plant at Construction Sites in Taichung City
Authors: Shang-Hsin Ou, Yang-Chun Lin, Ke-Hao Cheng
Abstract:
The year 2021 marked a historic drought in Taiwan, posing unprecedented challenges due to record-low rainfall and inadequate reservoir storage. The central region experienced water scarcity, leading to the implementation of "Groundwater Utilization at Construction Sites" for drought-resilient livelihood water supply. This study focuses on the establishment process of temporary groundwater treatment plants at construction sites in Taichung City, serving as a reference for future emergency response and the utilization of construction site groundwater. To identify suitable sites for groundwater reuse projects, site selection operations were carried out based on relevant water quality regulations and assessment principles. Subsequently, the planning and design of temporary water treatment plants were conducted, considering the water quality, quantity, and on-site conditions of groundwater wells associated with construction projects. The study consolidates the major water treatment facilities at each site and addresses encountered challenges during the establishment process. Practical insights gained from operating temporary groundwater treatment plants are presented, including improvements related to stable water quality, water quantity, equipment operation, and hydraulic control. In light of possible future droughts, this study provides an outlook and recommendations to expedite and improve the setup of groundwater treatment plants at construction sites. This includes considering on-site water abstraction, treatment, and distribution conditions. The study aims to provide concise guidelines for setting up and managing temporary groundwater treatment plants at construction sites, drawing insights from Taichung City's establishment process. It offers recommendations for addressing challenges like water quality, quantity, equipment operation, and regulation compliance. By sharing these insights, it aims to aid regions facing similar emergencies, ensuring sustainable water supply and societal stability amidst water shortages and droughts.
Keywords: Drought resilience, groundwater treatment, construction site, water supply.
3628 Hybrid Authentication System Using QR Code with OTP
Authors: Salim Istyaq
Abstract:
As we know, the number of Internet users is increasing drastically. People now use different online services provided by banks, colleges/schools, hospitals, online utility and bill payment services, and online shopping sites. To access online services, text-based authentication systems are in use. The text-based authentication scheme has some drawbacks, with usability and security issues that bring trouble to users. The core element of computational trust is identity. The aim of the paper is to make the system more complicated for impostors and more reliable for users by using a graphical authentication approach. In this paper, we use the more powerful tool of encoding the options in a graphical QR format, and an acknowledgment is sent to the user's mobile for final verification. The main methodology depends upon the encryption option and final verification by confirming a set of pass phrases for legitimate users; the outcome is very powerful, as it only gives a result once the process has completed successfully. All processes are cross-linked serially: the output of the first process is the input of the second, and so on. The system is a combination of recognition- and pure recall-based techniques. The presented scheme is useful for devices like PDAs, iPods, phones, etc., which are handier and more convenient to use than traditional desktop computer systems.
Keywords: Graphical Password, OTP, QR Codes, Recognition based graphical user authentication, usability and security.
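The paper's exact protocol is not reproduced here; as a minimal sketch of the OTP building block such a hybrid scheme could rely on, the code below computes an RFC 4226 HMAC-based one-time password with the Python standard library. The secret, counter and QR-based challenge flow mentioned in the comments are assumptions for illustration.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (the OTP building block)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret-registered-at-enrolment"   # hypothetical shared secret
print("one-time password:", hotp(secret, counter=42))
# In a QR-based flow, the server could embed a login challenge (e.g. a session
# nonce) in a QR image for the user to scan, then confirm it with an OTP of
# this kind delivered to the registered mobile number.
```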
3627 Development of an Indoor Drone Designed for the Needs of the Creative Industries
Authors: V. Santamarina Campos, M. de Miguel Molina, S. Kröner, B. de Miguel Molina
Abstract:
With this contribution, we want to show how the AiRT system could change the future way of working of a part of the creative industry and what new economic opportunities could arise for it. Remotely Piloted Aircraft Systems (RPAS), more commonly known as drones, are now essential tools used by many different companies for their creative outdoor work. However, using this very flexible tool indoors is almost impossible, since safe navigation cannot be guaranteed by the operator due to the lack of a reliable and affordable indoor positioning system which ensures a stable flight, among other issues. Here we present the first results of a European project, which consists of developing an indoor drone for professional footage especially designed for the creative industries. One of the main achievements of this project is the successful involvement of the end-users in the overall design process from the very beginning. To ensure safe flight in confined spaces, our drone incorporates a positioning system based on ultra-wide band technology, an RGB-D (depth) camera for 3D environment reconstruction, and the possibility to fully pre-program automatic flights. Since we also want to offer this tool to inexperienced pilots, we have focused on user-friendly handling of the whole system throughout the entire process.
Keywords: Virtual reality, 3D reconstruction, indoor positioning system, UWB, RPAS, aerial film, intelligent navigation, advanced safety measures, creative industries.
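The AiRT positioning algorithm itself is not described in the abstract; as an illustration of how an ultra-wide band fix can be obtained from anchor ranges, the sketch below performs a linearized least-squares trilateration. The anchor layout and the simulated drone position are assumptions.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position fix from UWB anchor coordinates and measured
    ranges (linearized by subtracting the first anchor's range equation)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four wall-mounted anchors (m) and simulated ranges to a drone at (2, 3, 1.5)
anchors = [(0, 0, 0), (10, 0, 0), (0, 8, 0), (10, 8, 3)]
true_pos = np.array([2.0, 3.0, 1.5])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print("estimated position:", trilaterate(anchors, ranges))
```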
3626 Classifier Based Text Mining for Neural Network
Authors: M. Govindarajan, R. M. Chandrasekaran
Abstract:
Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. In neural networks that address classification problems, the training set, testing set and learning rate are key elements: a collection of input/output patterns is used to train the network and to assess its performance, and the learning rate sets the rate of adjustments. This paper describes a proposed back propagation neural net classifier that performs cross validation for the original neural network, in order to improve classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather symbolic, weather, and labor-nega-data. It is shown that, compared to the existing neural network, the training time is reduced by more than a factor of 10 when the dataset is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all datasets except contact-lenses, which is the only one with missing attributes. For contact-lenses, the accuracy with the proposed neural network was on average around 0.3% less than with the original neural network. This algorithm is independent of specific data sets, so that many ideas and solutions can be transferred to other classifier paradigms.
Keywords: Back propagation, classification accuracy, text mining, time complexity.
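The original classifier implementation is not available; a minimal stand-in (assuming scikit-learn) showing a back-propagation network evaluated with k-fold cross-validation, the "percent correct" measure the abstract refers to, is sketched below on a small substitute dataset.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)   # small stand-in for the UCI-style datasets

# Back-propagation network: one hidden layer, explicit learning rate,
# evaluated with 5-fold cross-validation ("percent correct").
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), learning_rate_init=0.01,
                  max_iter=2000, random_state=0),
)
scores = cross_val_score(net, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```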
3625 A New Intelligent, Dynamic and Real Time Management System of Sewerage
Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem
Abstract:
The current tools for real time management of sewer systems are based on two software tools: weather forecasting software and hydraulic simulation software. The use of the former is an important cause of imprecision and uncertainty, while the use of the latter imposes long decision time steps because of its computation time. As a result, the obtained results are generally different from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make possible the realization of a large number of simulations in very short times (a few seconds), allowing weather forecasts to be replaced by directly using real-time pluviometric data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was realized and tested with rainfalls of different return periods. The gains obtained in rejected volume vary from 19 to 100%. A new algorithm was then developed to optimize calculation time and thus to overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The obtained gains are 40% of the total volume rejected to the natural environment and 65% in the number of discharges.
Keywords: Automation, optimization, paradigm, RTC.
3624 Evaluation of Numerical Modeling of Jet Grouting Design Using in situ Loading Test
Authors: Reza Ziaie Moayed, Ehsan Azini
Abstract:
Jet grouting (JG) is one of the methods of improving and increasing the strength and bearing capacity of soil, in which high-pressure water or grout is injected through nozzles into the soil. During this process, part of the soil and grout particles comes out of the drill borehole, and the other part is mixed with the grout in place; as a result, a mass of modified soil is created. The purpose of this method is to change the soil into a mixture of soil and cement, commonly known as "soil-cement". In this paper, first, the principles of high-pressure injection and then the effective parameters of the JG method are described. Then, the tests on samples taken from the columns formed from the excavation around the soil-cement columns, as well as the static loading test on the created column, are discussed. In the other part of this paper, the soil behavior models for numerical modeling in the PLAXIS software are mentioned. The purpose of this paper is to evaluate the results of numerical modeling on the basis of in-situ static loading tests. The results indicate an acceptable agreement between the results of the mentioned tests and the modeling results. Also, modeling with this software can be used as an appropriate option to assess the technical feasibility of soil improvement using JG.
Keywords: Jet grouting column, Soil improvement, Numerical modeling, In-situ loading test.
3623 Study of Natural Patterns on Digital Image Correlation Using Simulation Method
Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish
Abstract:
Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices, such as strain gauges, which only provide very restricted coverage and are expensive to deploy widely, the DIC technique provides results with full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique, because the preparation of artificial patterns is a time-consuming and hectic process. The objective of this research is to study the effect of using images having a natural pattern on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC, the subset size, can affect the processing and accuracy of DIC and even cause DIC to fail. Regarding the picture parameters (correlation coefficient), a high similarity between two subsets can lead the DIC process to fail and make the result more inaccurate. Pictures of good and bad quality for DIC methods are presented and, more importantly, this provides a systematic way to evaluate the quality of pictures with natural patterns before installing the measurement devices.
Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.
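A short sketch of the subset matching criterion behind DIC, the zero-normalized cross-correlation between a reference subset and a candidate subset, may help illustrate why natural-pattern distinctiveness and subset size matter; the subset contents here are random stand-ins, not images from the study.

```python
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two equally sized subsets.
    Values near 1 indicate a good match; if many unrelated subsets also score
    near 1, the pattern is ambiguous and tracking may fail."""
    f = f - f.mean()
    g = g - g.mean()
    denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
    return float((f * g).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
subset = rng.random((21, 21))                    # 21 x 21 px reference subset
deformed = subset + 0.02 * rng.random((21, 21))  # simulated deformed subset
print("ZNCC with its deformed counterpart:", round(zncc(subset, deformed), 4))
print("ZNCC with an unrelated subset:", round(zncc(subset, rng.random((21, 21))), 4))
```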
3622 Trajectory Tracking of a Redundant Hybrid Manipulator Using a Switching Control Method
Authors: Atilla Bayram
Abstract:
This paper presents the trajectory tracking control of a spatial redundant hybrid manipulator. This manipulator consists of two parallel manipulators, each of which is a variable geometry truss (VGT) module. In fact, each VGT module with 3 degrees of freedom (DOF) is a planar parallel manipulator, and the operational planes of these VGT modules are arranged to be orthogonal to each other. Also, the manipulator contains a twist motion part attached to the top of the second VGT module to supply the missing orientation of the end-effector. These three modules constitute a 7-DOF hybrid (parallel-parallel) redundant spatial manipulator. The forward kinematics equations of this manipulator are obtained; then, according to these equations, the inverse kinematics is solved based on an optimization with joint limit avoidance. The dynamic equations are formed by using the virtual work method. In order to test the performance of the redundant manipulator and the controllers presented, two different desired trajectories are followed by using the computed force control method and a switching control method. The switching control method combines the computed force control method with a genetic algorithm; in it, the genetic algorithm is only used for fine tuning in the compensation of the trajectory tracking errors.
Keywords: Computed force control method, genetic algorithm, hybrid manipulator, inverse kinematics of redundant manipulators, variable geometry truss.
3621 Incorporating Semantic Similarity Measure in Genetic Algorithm: An Approach for Searching the Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias, Hany T. Alashwal, Rohayanti Hassan, Farhan Mohamed
Abstract:
The most important property of the Gene Ontology is its terms. These controlled vocabularies are defined to provide consistent descriptions of gene products that are shareable and computationally accessible by humans, software agents, or other machine-readable metadata. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, the process of searching the terms is still carried out using a traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends highly on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the similarity strength between two terms. Then, the genetic algorithm is employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. Computational results are presented to show the effectiveness of the proposed algorithm.
Keywords: Gene Ontology, semantic similarity measure, genetic algorithm, ontology search.
3620 Correlation and Prediction of Biodiesel Density
Authors: Nieves M. C. Talavera-Prieto, Abel G. M. Ferreira, António T. G. Portugal, Rui J. Moreira, Jaime B. Santos
Abstract:
The knowledge of biodiesel density over large ranges of temperature and pressure is important for predicting the behavior of fuel injection and combustion systems in diesel engines, and for the optimization of such systems. In this study, cottonseed oil was transesterified into biodiesel and its density was measured at temperatures between 288 K and 358 K and pressures between 0.1 MPa and 30 MPa, with an expanded uncertainty estimated as ±1.6 kg·m-3. Experimental pressure-volume-temperature (pVT) cottonseed data were used along with literature data for 18 other biodiesels in order to build a database used to test the correlation of density with temperature and pressure using the Goharshadi–Morsali–Abbaspour equation of state (GMA EoS). To our knowledge, this is the first time that density measurements have been presented for cottonseed biodiesel under such high pressures, and that the GMA EoS has been used to model biodiesel density. The newly tested EoS allowed correlations within 0.2 kg·m-3, corresponding to average relative deviations within 0.02%. The database was then used to develop and test a new fully predictive model derived from the observed linear relation between density and degree of unsaturation (DU), which depends on the biodiesel FAME profile. The average density deviation of this method was only about 3 kg·m-3 within the temperature and pressure limits of application. These results represent an appreciable improvement in the context of density prediction at high pressure when compared with other equations of state.
Keywords: Biodiesel, Correlation, Density, Equation of state, Prediction.
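The GMA equation of state itself is not reproduced here; as a simpler illustration of correlating density with temperature and degree of unsaturation by least squares, the sketch below fits rho = a0 + a1*T + a2*DU to purely hypothetical data of realistic magnitude.

```python
import numpy as np

# Hypothetical atmospheric-pressure data: temperature (K), degree of
# unsaturation DU, and density (kg/m3) for two illustrative biodiesels.
T   = np.array([288, 308, 328, 348, 288, 308, 328, 348], dtype=float)
DU  = np.array([135, 135, 135, 135,  90,  90,  90,  90], dtype=float)
rho = np.array([885.1, 870.6, 856.0, 841.5, 879.3, 864.9, 850.4, 835.8])

# Fit rho = a0 + a1*T + a2*DU by ordinary least squares
A = np.column_stack([np.ones_like(T), T, DU])
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
a0, a1, a2 = coef

pred = A @ coef
aad = np.mean(np.abs(pred - rho))   # average absolute deviation, kg/m3
print(f"rho ~ {a0:.1f} + {a1:.3f}*T + {a2:.4f}*DU   (AAD = {aad:.2f} kg/m3)")
```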