Search results for: Optimization Algorithms
87 Friction Stir Welded Joint Aluminum Alloy H20-H20 with Different Type of Tools Mechanical Properties
Authors: Omid A. Zargar
Abstract:
In this project, three types of tools (straight cylindrical, taper cylindrical and triangular), all made of high-speed steel (WC-Co), were used for friction stir welding (FSW) of aluminum alloy H20–H20, and the mechanical properties of the welded joints were evaluated by tensile and Vickers hardness testing. The measured mechanical properties were compared with each other to draw conclusions. The results support the optimization of welding parameters for different friction stir processes, such as rotational speed, depth of welding, travel speed, type of material, type of joint, workpiece dimensions, joint dimensions, tool material and tool geometry. Previous investigations have addressed different workpiece materials, joint types, machining parameters and preheating temperatures. In this investigation, the three tool types mentioned above, which are popular in FSW, were tested, and the results complement the other aspects of the process. It is hoped that this paper can open a new horizon in the experimental investigation of the mechanical properties of friction stir welded joints with other tool types, such as oval-shaped, paddle-shaped, three-flat-sided and three-sided re-entrant probes, and with other materials and alloys, such as titanium or steel, in the near future.
Keywords: Friction stir welding (FSW), tool, CNC milling machine, aluminum alloy H20, Vickers hardness test, tensile test, straight cylindrical tool, taper cylindrical tool, triangular tool.
86 Acceleration-Based Motion Model for Visual SLAM
Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. The utilization of this model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, it is often difficult to satisfy the constant velocity motion model in actual situations. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To provide a more accurate description of the camera pose acceleration, we separate the pose transformation matrix into its rotation matrix and translation vector components. The rotation matrix is represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method gives a more accurate initial pose estimate, resulting in an improvement of 6.61% and 6.46% in the accuracy of the ORB-SLAM3 system on the two test sequences, respectively.
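As an illustration of the prediction step described above, the following numpy/scipy sketch extrapolates the initial pose of the next frame by assuming that the change of the frame-to-frame rotation-vector and translation increments is constant (constant acceleration). The function and variable names are illustrative; this is not the authors' ORB-SLAM3 implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def predict_initial_pose(R_a, t_a, R_b, t_b, R_c, t_c):
    """Predict the pose of the next frame from the three most recent camera poses
    (a = oldest, c = newest). The pose change is split into a rotation vector and a
    translation increment, and the change of these increments is assumed constant
    (constant acceleration) instead of the increments themselves (constant velocity)."""
    # Relative motion between consecutive frames
    dr_1 = Rot.from_matrix(R_b @ R_a.T).as_rotvec()   # older increment
    dr_2 = Rot.from_matrix(R_c @ R_b.T).as_rotvec()   # latest increment
    dt_1 = t_b - t_a
    dt_2 = t_c - t_b

    # Extrapolate: next increment = latest increment + its last observed change
    dr_next = dr_2 + (dr_2 - dr_1)
    dt_next = dt_2 + (dt_2 - dt_1)

    R_pred = Rot.from_rotvec(dr_next).as_matrix() @ R_c
    t_pred = t_c + dt_next
    return R_pred, t_pred

# Tiny usage example with identity rotations and a uniformly accelerating translation
I = np.eye(3)
R, t = predict_initial_pose(I, np.zeros(3), I, np.array([0.1, 0, 0]), I, np.array([0.3, 0, 0]))
print(t)   # -> [0.6, 0, 0], consistent with constant acceleration along x
```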
Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.
85 Explicit Solution of an Investment Plan for a DC Pension Scheme with Voluntary Contributions and Return Clause under Logarithm Utility
Authors: Promise A. Azor, Avievie Igodo, Esabai M. Ase
Abstract:
The paper merges the return of premium clause and voluntary contributions to investigate retirees' investment plan in a defined contribution (DC) pension scheme with a portfolio comprising a risk-free asset and a risky asset whose price process is described by geometric Brownian motion (GBM). The paper considers additional voluntary contributions paid by members, the charge on balance levied by pension fund administrators, and the mortality risk of members of the scheme during the accumulation period by introducing a return of premium clause. To achieve this, the Weibull mortality force function is used to establish the mortality rate of members during the accumulation phase. Furthermore, an optimization problem in the form of the Hamilton-Jacobi-Bellman (HJB) equation is obtained using the dynamic programming approach. Also, the Legendre transformation method is used to transform the HJB equation, which is a nonlinear partial differential equation, into a linear partial differential equation, and the resultant equation is solved for the value function and the optimal distribution plan under a logarithm utility function. Finally, numerical simulations of the impact of some important parameters on the optimal distribution plan were obtained, and it was observed that the optimal distribution plan is inversely proportional to the initial fund size, the predetermined interest rate, the additional voluntary contributions, the charge on balance and the instantaneous volatility.
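For reference, the standard building blocks assumed by the abstract can be written compactly; the notation below is generic and not necessarily that of the paper.

```latex
% Asset dynamics and Weibull force of mortality (generic notation, assumed forms)
\begin{align}
  dB_t &= r\,B_t\,dt && \text{(risk-free asset)}\\
  dS_t &= \mu\,S_t\,dt + \sigma\,S_t\,dW_t && \text{(risky asset, geometric Brownian motion)}\\
  \mu_{\mathrm{mort}}(t) &= \frac{k}{\lambda}\Bigl(\frac{t}{\lambda}\Bigr)^{k-1} && \text{(Weibull force of mortality)}
\end{align}
```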
Keywords: Legendre transform, logarithm utility, optimal distribution plan, return clause of premium, charge on balance, Weibull mortality function.
84 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf communities and those with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for Lesotho's Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules. The modules consist of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning. Canny pruning applies Canny edge detection, an image processing algorithm used to detect the edges of an object, to discard regions that are unlikely to contain the target. The system also employs a skin detection algorithm. The skin detection performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered; the higher the light intensity, the faster the detection. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
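As a sketch of the detection stage, the snippet below uses the OpenCV Python API, which exposes the same Haar cascade machinery (including the Canny pruning flag) that EmguCV wraps; the cascade file name and the HSV skin thresholds are assumed placeholders rather than values from the paper.

```python
import cv2
import numpy as np

# Hypothetical hand cascade file; EmguCV and OpenCV share the same cascade format.
cascade = cv2.CascadeClassifier("hand.xml")

def detect_hand(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions whose Canny edge content makes a hit unlikely.
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING,
                                     minSize=(40, 40))
    # Simple HSV skin mask to support segmentation inside each detected region.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array((0, 30, 60), np.uint8), np.array((20, 150, 255), np.uint8))
    results = []
    for (x, y, w, h) in hands:
        roi = skin[y:y + h, x:x + w]
        contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        c = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(c)                    # convex hull of the hand blob
        m = cv2.moments(c)
        if m["m00"] > 0:                            # centroid of the hand blob
            cx, cy = int(m["m10"] / m["m00"]) + x, int(m["m01"] / m["m00"]) + y
            results.append(((x, y, w, h), hull, (cx, cy)))
    return results
```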
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
83 Optimization of a Bioremediation Strategy for an Urban Stream of Matanza-Riachuelo Basin
Authors: María D. Groppa, Andrea Trentini, Myriam Zawoznik, Roxana Bigi, Carlos Nadra, Patricia L. Marconi
Abstract:
In the present work, a remediation bioprocess based on the use of a local isolate of the microalga Chlorella vulgaris immobilized in alginate beads is proposed. This process was shown to be effective for the reduction of several chemical and microbial contaminants present in the Cildáñez stream, a water course that is part of the Matanza-Riachuelo Basin (Buenos Aires, Argentina). The bioprocess, involving the culture of the microalga in autotrophic conditions in a stirred-tank bioreactor supplied with a marine propeller for 6 days, allowed a significant reduction of Escherichia coli and total coliform numbers (over 95%), as well as of ammoniacal nitrogen (96%), nitrates (86%), nitrites (98%), and total phosphorus (53%) contents. Pb content was also significantly diminished after the bioprocess (95%). Standardized cytotoxicity tests using Allium cepa seeds and Cildáñez water pre- and post-remediation were also performed. The germination rate and mitotic index of onion seeds imbibed in Cildáñez water subjected to the bioprocess were similar to those observed in seeds imbibed in distilled water and significantly superior to those registered when untreated Cildáñez water was used for imbibition. Our results demonstrate the potential of this simple and cost-effective technology to remove urban-water contaminants, offering as an additional advantage the possibility of an easy biomass recovery, which may become a source of alternative energy.
Keywords: Bioreactor, bioremediation, Chlorella vulgaris, Matanza-Riachuelo basin, microalgae.
82 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees
Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel
Abstract:
Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the loading and downloading of DICOM images, hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to the sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
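A minimal sketch of the idea follows, under an assumed metadata schema (file name, columns and partition label are illustrative): a decision tree trained on DICOM metadata predicts where a requested image resides, so only that subset of the cloud store is queried instead of scanning sequentially.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical metadata table extracted from the DICOM headers of the stored studies.
meta = pd.read_csv("dicom_metadata.csv")       # columns: Modality, BodyPart, StudyYear, Partition

enc = OrdinalEncoder()
X = enc.fit_transform(meta[["Modality", "BodyPart", "StudyYear"]])
y = meta["Partition"]                          # where each image actually resides in the cloud store

tree = DecisionTreeClassifier(max_depth=6).fit(X, y)

def candidate_partition(modality, body_part, study_year):
    """Predict which storage partition to query, so a request scans only that subset
    instead of searching the whole repository sequentially."""
    q = enc.transform([[modality, body_part, study_year]])
    return tree.predict(q)[0]

# e.g. candidate_partition("MR", "HEAD", 2017)
```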
Keywords: Cloud storage, decision trees, diagnostic image, search, telemedicine.
81 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia
Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden
Abstract:
The proposed stringent mitigation targets require an immediate start to a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating the total electricity produced over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system to ensure security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate system integration of high shares of VRES is to increase cross-sectoral integration through the coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model to Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.
Keywords: Decarbonization, energy system modeling, sector coupling, variable renewable energies.
80 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition
Authors: Redouane Tlemsani, Abdelkader Benyettou
Abstract:
This work addresses on-line Arabic character recognition; the principal motivation is to study Arabic manuscript with on-line technology.
The system is Markovian and can be viewed as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in training the complete model (topology and parameters) from training data.
Our approach is based on the dynamic Bayesian network formalism. DBN theory generalizes Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependences) between the variables of the dynamic network.
In pattern recognition applications, the structure is usually fixed, which obliges us to admit some strong assumptions (for example, independence between some variables). Our application concerns on-line recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.
The DBN score model and the mixed DBN achieve 70.24% and 62.50% recognition, respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.
Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.
79 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method
Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky
Abstract:
It is known that residual welding deformations negatively affect the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, the selection of an optimal technology, ensuring minimum welding deformations, is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of the metal due to welding heating and subsequent cooling. However, the dependences for determining the structural deformations arising as a result of these volumetric changes in the weld area allowed calculations only for simple structures, such as units, flat sections and sections with small curvature. In the case of complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite elements method for solving the deformation problem. Here, one first calculates the longitudinal and transversal shortenings of the welded joints using the method of analytic dependences and then, from the obtained shortenings, calculates forces whose action is equivalent to that of the active welding stresses. Further, a finite-element model of the structure is developed and the equivalent forces are added to this model. Based on the calculation results, an optimal sequence of assembly and welding is selected and special measures to reduce and compensate welding deformations are developed and taken.
Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.
78 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings
Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez
Abstract:
Optimal load factors (dead, live and seismic) used for the design of buildings may be different, depending on the seismic ground motion characteristics to which they are subjected, which are closely related to the type of soil conditions where the structures are located. The influence of the type of soil on those load factors is analyzed in the present study. A methodology that is useful for establishing optimal load factors that minimize the cost over the life cycle of the structure is employed; as a restriction, it is established that the probability of structural failure must be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings. One set (consisting of 4-, 7-, and 10-story buildings) is located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on the soft soil (Ts = 1.5 s) of Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located in the firm zone are analyzed under the action of 10 strong seismic records, and those in the soft zone under 13 strong ground motions. All the motions correspond to seismic subduction events with magnitudes M = 6.9. Then, the structural damage and the expected total costs corresponding to each group of buildings are estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground is different from that for buildings located on soft soil.
Keywords: Life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil.
77 Statistical Analysis and Optimization of a Process for CO2 Capture
Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi
Abstract:
CO2 capture and storage technologies play a significant role in contributing to the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process where carbon dioxide is passed into pH-adjusted high salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and higher pH level without the use of ammonia. The process was tested in a bubble column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and four variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10 to 50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
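The RSM step can be sketched as follows, assuming the CCD runs are available in a simple table (the file layout is hypothetical and this is not the authors' Design-Expert/Minitab workflow): a full second-order polynomial is fitted to the responses and a bounded optimizer plays the role of the response optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical CCD results: columns are gas flow [L/min], temperature [degC],
# buffer concentration [%], salinity [g NaCl/L], and CO2 capture efficiency [%].
runs = np.loadtxt("ccd_runs.csv", delimiter=",")
X, y = runs[:, :4], runs[:, 4]

poly = PolynomialFeatures(degree=2)                # full second-order (quadratic) RSM model
model = LinearRegression().fit(poly.fit_transform(X), y)

def neg_efficiency(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

bounds = [(0.5, 1.5), (10, 50), (0.2, 2.6), (25, 197)]   # variable ranges from the abstract
x0 = np.array([(lo + hi) / 2 for lo, hi in bounds])
opt = minimize(neg_efficiency, x0, bounds=bounds)         # stands in for the response optimizer
print("predicted optimum:", opt.x, "capture efficiency:", -opt.fun)
```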
Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.
76 Thermal Analysis of a Transport Refrigeration Power Pack Unit Using a Coupled 1D/3D Simulation Approach
Authors: A. Kospach, A. Mladek, M. Waltenberger, F. Schilling
Abstract:
In this work, a coupled 1D/3D simulation approach for the thermal protection and optimization of a trailer refrigeration power pack unit was developed. With the developed 1D/3D simulation approach, thermally critical scenarios, such as summer high-load scenarios, are investigated. The 1D thermal model consists of the thermal network, which includes different point masses and the associated heat transfers, the coolant and oil circuits, as well as the fan unit. The 3D computational fluid dynamics (CFD) model was developed to model the air flow through the power pack unit, considering convective heat transfer effects. In the 1D thermal model the temperatures of the individual point masses were calculated, which served as input variables for the 3D CFD model. For the calculation of the point mass temperatures in the 1D thermal model, the convective heat transfer rates from the 3D CFD model were required as input variables. These two variables (point mass temperatures and convective heat transfer rates) were the main coupling variables for the coupled 1D/3D simulation model. The coupled 1D/3D model was validated with measurements under normal operating conditions. Coupled simulations for the summer high-load case were then performed and compared with a reference case under normal operating conditions. Hot temperature regions and components could be identified. Due to the detailed information about the flow field, temperatures and heat fluxes, it was possible to directly derive improvement suggestions for the cooling design of the transport refrigeration power pack unit.
Keywords: Coupled thermal simulation, thermal analysis, transport refrigeration unit, 3D computational fluid dynamics, 1D thermal modelling, thermal management systems.
75 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infested by Using HS-SPME-GC-FID/MS
Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren
Abstract:
An optimized technique has been developed for extracting the volatile organic compounds which contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and fruit infested with the California red scale Aonidiella aurantii were characterized using headspace solid phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography with mass spectrometry (GC-MS), a very simple, efficient and nondestructive extraction method. A three-phase 50/30 μm PDV/DVB/CAR fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles reaching equilibrium from whole lime fruit in the headspace of the chamber were 16 and 4 hours, respectively. A desorption time of 5 min was selected for the three-phase fibre. Herbivore activity induces indirect plant defenses, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences among the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds, with similarities higher than 85%, in accordance with the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three, undecane, α-farnesene and 7-epi-α-selinene, were decreased. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies' production techniques.
Keywords: Lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS.
74 Use of Corn Stover for the Production of 2G Bioethanol, Enzymes and Xylitol under a Biorefinery Concept
Authors: Astorga-Trejo Rebeca, Fonseca-Peralta Héctor Manuel, Beltrán-Arredondo Laura Ivonne, Castro-Martínez Claudia
Abstract:
The use of biomass as feedstock for the production of fuels and other chemicals of interest is an increasingly accepted option on the way to the development of biorefinery complexes. In the Mexican state of Sinaloa, a significant amount of residue from corn crops is produced every year, most of which can be converted to bioethanol and other products through biotechnological conversion using yeast and other microorganisms. Therefore, the objective of this work was to take advantage of corn stover and evaluate its potential as a substrate for the production of second-generation (2G) bioethanol, enzymes and xylitol. To produce 2G bioethanol, an acid-alkaline pretreatment was carried out prior to saccharification and fermentation. The microorganisms used for the production of enzymes, as well as for the production of xylitol, were isolated and characterized by our work group. Statistical analysis was performed using Design Expert version 11.0. The results showed that it is possible to obtain 2G bioethanol employing corn stover as a carbon source and Saccharomyces cerevisiae ItVer01 and Candida intermedia CBE002, with yields of 0.42 g and 0.31 g, respectively. It was also shown that C. intermedia has the ability to produce xylitol with a good yield (0.46 g/g). On the other hand, qualitative and quantitative studies showed that the native strains of Fusarium equiseti (0.4 IU/mL xylanase), Bacillus velezensis (1.2 IU/mL xylanase and 0.4 IU/mL amylase) and Penicillium funiculosum (1.5 IU/mL cellulases) have the capacity to produce xylanases, amylases or cellulases using corn stover as raw material. This study allowed us to demonstrate that it is possible to use corn stover, a low-cost raw material with high availability in our country, as a carbon source to obtain bioproducts of industrial interest, using processes that are more environmentally friendly and sustainable. It is still necessary to continue the optimization of each bioprocess.
Keywords: Biomass, corn stover, biorefinery, bioethanol 2G, enzymes, xylitol.
73 Numerical Simulation in the Air-Curtain Installed Subway Tunnel for the Indoor Air Quality
Authors: Kyung Jin Ryu, Makhsuda Juraeva, Sang-Hyun Jeong, Dong Joo Song
Abstract:
Platform screen doors improve Indoor Air Quality (IAQ) in the subway station; however, the air quality is degraded in the subway tunnel. CO2 concentration and indoor particulate matter values are high in the tunnel. The IAQ level in the subway tunnel degrades as train movements increase. Installing an air-curtain reduces dust, particles and moving toxic smoke and still permits traffic, by generating a virtual wall. The ventilation systems of the subway tunnel need improvements to achieve better air quality. Numerical analyses are effective tools for analyzing the flow field inside the air-curtain-installed subway tunnel. The ANSYS CFX software is used for steady computations of the airflow inside the tunnel. The single-track subway tunnel has a natural shaft, a mechanical shaft, and stations with platform screen doors installed. The height and width of the tunnel are 6.0 m and 4.0 m, respectively. The tunnel is 400 m long and the air-curtain is installed at the top of the tunnel. The thickness and the width of the air-curtain are 0.08 m and 4 m, respectively. The velocity of the air-curtain varies between 20 and 30 m/s. Three cases are analyzed depending on the installation location of the air-curtain. The discharged air through the natural shafts increases as the velocity of the air-curtain increases when the air-curtain is installed between the mechanical and the natural shafts. The polluted air is exhausted through the mechanical and the natural shafts, and the remaining air is pushed toward the tunnel end. The discharged air through the natural shaft is low when the air-curtain is installed before the natural shaft. The mass flow rate in the tunnel after the mechanical shaft decreases as the air-curtain velocity increases. The computational results for the air-curtain-installed tunnel form the basis for the optimum design study. The air-curtain installation location is chosen between the mechanical and the natural shafts. The velocity of the air-curtain is fixed at 25 m/s. The thickness and the blowing angles of the air-curtain are the design variables for the optimum design study. The objective function of the design optimization is maximizing the discharged air through the natural shaft.
Keywords: Air-curtain, indoor air quality, single-track subway tunnel.
72 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks
Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha
Abstract:
This article discusses the cost-benefit analysis aspects of millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was carried out for regular cellular topologies, considering the unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths of R ≈ 10 m for the millimetre wavebands, and for the longest distances an optimum of the revenue can be observed at R ≈ 550 m for the 5.62 GHz band. It is possible to observe that, for the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands for the shortest values of R, and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, reaching a maximum for values of R approximately equal to 550 m.
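The throughput comparison can be sketched as below; the break-point distance, path loss exponents, transmit power and noise figure are illustrative assumptions, and interference is ignored, so this only shows the mechanics of comparing a two-slope model for the SHF band with a Friis-type model for the mmWave bands.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def fspl_db(d, f_hz):
    """Free-space (Friis) path loss in dB."""
    return 20 * np.log10(4 * np.pi * np.asarray(d, float) * f_hz / C)

def two_slope_db(d, f_hz, d_bp=160.0, n1=2.0, n2=4.0):
    """Two-slope line-of-sight model: free-space-like slope up to the break-point
    distance d_bp, steeper slope beyond it (d_bp and exponents are assumed values)."""
    d = np.asarray(d, float)
    pl_1m = fspl_db(1.0, f_hz)
    below = pl_1m + 10 * n1 * np.log10(d)
    above = pl_1m + 10 * n1 * np.log10(d_bp) + 10 * n2 * np.log10(d / d_bp)
    return np.where(d < d_bp, below, above)

def shannon_throughput_mbps(pl_db, tx_dbm=30.0, noise_dbm=-94.0, bw_hz=100e6):
    snr_db = tx_dbm - pl_db - noise_dbm            # interference neglected in this sketch
    return bw_hz * np.log2(1 + 10 ** (snr_db / 10)) / 1e6

d = np.array([10, 50, 200, 550])                   # cell lengths [m]
print(shannon_throughput_mbps(two_slope_db(d, 5.62e9)))   # SHF band
print(shannon_throughput_mbps(fspl_db(d, 28e9)))          # mmWave band (Friis-type)
```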
Keywords: 5G, millimetre wavebands, super high-frequency band, signal-to-interference-plus-noise ratio (SINR), cost-benefit analysis.
71 Formulation and ex vivo Evaluation of Solid Lipid Nanoparticles (SLNS) Based Hydrogel for Intranasal Drug Delivery
Authors: Pramod Jagtap, Kisan Jadhav, Neha Dand
Abstract:
Risperidone (RISP) is an antipsychotic agent with low water solubility, and its non-targeted delivery results in numerous side effects. Hence, an attempt was made to develop an SLN hydrogel for intranasal delivery of RISP to achieve maximum bioavailability and reduction of side effects. RISP-loaded SLNs composed of 1.65% (w/v) lipid mass were produced by a high shear homogenization (HSH) coupled ultrasound (US) method using glyceryl monostearate (GMS) or Imwitor 900K (solid lipid). The particles were loaded with 0.2% (w/v) of RISP and surface-tailored with 2.02% (w/v) of the non-ionic surfactant Tween® 80. Optimization was done using a 3² factorial design in Design Expert® software. The prepared SLN dispersion was incorporated into Polycarbophil AA1 hydrogel (0.5% w/v). The final gel formulation was evaluated for entrapment efficiency, particle size, rheological properties, X-ray diffraction, in vitro diffusion, ex vivo permeation using sheep nasal mucosa, and histopathology for nasociliary toxicity. The entrapment efficiency of the optimized SLNs was found to be 76 ± 2%, the polydispersity index <0.3, and the particle size 278 ± 5 nm. This optimized batch was incorporated into the hydrogel. The pH was found to be 6.4 ± 0.14. The rheological behaviour of the hydrogel formulation revealed no thixotropic behaviour. In the histopathology study, no nasociliary toxicity was observed in the nasal mucosa after ex vivo permeation. The X-ray diffraction data show that the drug was in amorphous form. The ex vivo permeation study shows a controlled release profile of the drug.
Keywords: Ex vivo, particle size, risperidone, solid lipid nanoparticles.
70 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxinogenic capacity of the strains, the topography, and the soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxin on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
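A hedged sketch of the three-stage pipeline follows, with an assumed data file; the learned Mahalanobis metric (MMC) is approximated here simply by the covariance of the PCA-reduced data, which is a simplification of the method described.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical dataset: one row per orchard, soil/topography features in the first
# columns and the measured aflatoxin level in the last column.
data = np.loadtxt("fig_orchards.csv", delimiter=",")
X, y = data[:, :-1], data[:, -1]

X_red = PCA(n_components=5).fit_transform(X)        # dimensionality reduction step

# Mahalanobis distance in the reduced space (covariance-based) stands in for the
# learned MMC metric of the paper.
VI = np.linalg.inv(np.cov(X_red, rowvar=False))
knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis",
                          metric_params={"VI": VI}, algorithm="brute")
knn.fit(X_red, y)

pred = knn.predict(X_red)
pcc = np.corrcoef(y, pred)[0, 1]                    # Pearson correlation, as in the abstract
print(f"in-sample PCC = {pcc:.2f}")
```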
Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.
69 High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis
Authors: A. Ghanbari Mardasi, N. Wu, C. Wu
Abstract:
In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to erase the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and vibration analysis considering the crack effect are first proposed and performed in MATLAB based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect so as to locate the crack. The relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions of the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks with different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to different vibration mode shapes in order to overcome the edge effect problem of wavelet transformation and its effect on the localization of cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and initial mode shapes demonstrates that the window function eases the identification of the cracks close to the boundaries.
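A minimal numpy sketch of the windowed spatial wavelet transform is given below; the Gabor wavelet parameters are illustrative, and the code is not the MATLAB implementation used in the study.

```python
import numpy as np

def gabor_wavelet(x, scale, sigma=1.0, f0=1.0):
    """Real part of a Gabor (Gaussian-windowed harmonic) wavelet at a given scale."""
    u = x / scale
    return np.exp(-(u ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f0 * u) / np.sqrt(scale)

def spatial_wavelet_coeffs(mode_shape, dx, scales):
    """Continuous spatial wavelet transform of a Hanning-windowed mode shape.
    A local peak in |coefficients| flags a stiffness discontinuity (crack)."""
    n = len(mode_shape)
    windowed = mode_shape * np.hanning(n)          # suppress the edge effect
    x = (np.arange(n) - n // 2) * dx
    coeffs = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        coeffs[i] = np.convolve(windowed, gabor_wavelet(x, s), mode="same") * dx
    return coeffs

# Usage idea: sweep the scales, then keep the scale whose coefficient at the suspected
# crack position is largest relative to the intact region (the relative wavelet coefficient).
```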
Keywords: Edge effect, scale optimization, small crack locating, spatial wavelet.
68 Hydraulic Optimization of an Adjustable Spiral-Shaped Evaporator
Authors: Matthias Feiner, Francisco Javier Fernández García, Michael Arneman, Martin Kipfmüller
Abstract:
To ensure reliability in miniaturized devices or processes with increased heat fluxes, very efficient cooling methods have to be employed in order to cope with the small available cooling surfaces. To address this problem, a certain type of evaporator/heat exchanger was developed: it is called a swirl evaporator due to its flow characteristic. The swirl evaporator consists of a concentrically eroded screw geometry in which a capillary tube is guided; the assembly is inserted into a pocket hole in components with high heat load. Its inner diameter is between one and three millimeters. The liquid refrigerant R32 is sprayed through the capillary tube, which is aligned in the center of the bore hole, onto the end face of the blind hole and is sucked off against the injection direction through the screw geometry. The refrigerant is thus sucked off along a helical geometry (twisted flow), so that it is accelerated against the hot wall (centrifugal acceleration). This results in an increase in the critical heat flux of up to 40%. In this way, more heat can be dissipated on the same surface/available installation space. This enables a wide range of technical applications. To optimize the design for the needs of various fields of industry, such as internal tool cooling when machining nickel-base alloys like Inconel 718, a correlation-based model of the swirl evaporator was developed. The model is separated into 3 subgroups with overall 5 regimes. The pressure drop and heat transfer are calculated separately. An approach to determine the locality of phase change in the capillary and the swirl was implemented. A test stand has been developed to verify the simulation.
Keywords: Helically-shaped, oil-free, R32, swirl-evaporator, twist flow.
67 Stochastic Simulation of Reaction-Diffusion Systems
Authors: Paola Lecca, Lorenzo Dematte
Abstract:
Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂X(x, t)/∂t = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of the diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentration of solvent and solutes or on the local temperature of the medium. In this paper a new stochastic reaction-diffusion model is presented, in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute. Such a model provides a more realistic description of the molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction Ai → Aj, whose rate constant k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows a reaction-diffusion system to be assimilated to a pure reaction system and simulated with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific speed of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics. It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, the simulation results of chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study is attracting renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
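The following sketch shows the core idea for a single species hopping between two regions, with diffusion treated as a first-order reaction alongside an ordinary reaction; the rates and the degradation reaction are arbitrary examples, and this is not the Redi implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_reaction_diffusion(x0, k_diff, k_deg, t_end):
    """Minimal Gillespie simulation: molecules of one species hop between two spatial
    regions (diffusion as first-order reactions A_i -> A_j with rate k_diff) and are
    degraded in each region with rate k_deg."""
    x = np.array(x0, dtype=float)          # copy numbers in regions 0 and 1
    t, history = 0.0, [(0.0, x.copy())]
    while t < t_end:
        # propensities: hop 0->1, hop 1->0, degrade in 0, degrade in 1
        a = np.array([k_diff * x[0], k_diff * x[1], k_deg * x[0], k_deg * x[1]])
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)     # waiting time to the next event
        event = rng.choice(4, p=a / a0)    # which event fires
        if event == 0:
            x += [-1, 1]
        elif event == 1:
            x += [1, -1]
        elif event == 2:
            x[0] -= 1
        else:
            x[1] -= 1
        history.append((t, x.copy()))
    return history

traj = ssa_reaction_diffusion(x0=[100, 0], k_diff=0.5, k_deg=0.01, t_end=50.0)
```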
Keywords: Reaction-diffusion systems, Fick's law, stochastic simulation algorithm.
66 Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm
Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour
Abstract:
In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols which minimize the power consumption of the sensors receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets that are transmitted in the network. The technique of collecting data that combines related data and prevents the transmission of additional packets in the network can be effective in reducing the number of transmitted packets. Since information processing consumes less power than information transmission, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One of the data aggregation techniques is to use a data aggregation tree. However, finding an optimum data aggregation tree to collect data in networks with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes to form one packet. Thus, the number of packets transmitted in the network is reduced and less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used in order to solve this NP-hard problem; simulated annealing is one such optimization method. In this article, we propose a new method to build the data collection tree in wireless sensor networks using the simulated annealing algorithm, and we evaluate its efficiency against a genetic algorithm.
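A simplified sketch of the simulated annealing search over aggregation trees is shown below; to avoid cycle checks, a node may only choose a parent that is closer to the sink, and the link cost model (energy proportional to squared distance) is an assumption rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def link_cost(parent, pos):
    """Total transmission cost: energy per link taken proportional to squared distance."""
    return sum(np.sum((pos[i] - pos[parent[i]]) ** 2) for i in range(1, len(parent)))

def sa_aggregation_tree(pos, t0=10.0, cooling=0.995, iters=20000):
    """Simulated annealing over data-aggregation trees rooted at node 0 (the sink)."""
    n = len(pos)
    d_sink = np.linalg.norm(pos - pos[0], axis=1)
    parent = np.zeros(n, dtype=int)                # start: every node sends straight to the sink
    cost, temp = link_cost(parent, pos), t0
    for _ in range(iters):
        i = rng.integers(1, n)                     # pick a non-sink node and rewire it
        candidates = [j for j in range(n) if j != i and d_sink[j] < d_sink[i]]
        new_parent = parent.copy()
        new_parent[i] = rng.choice(candidates)
        new_cost = link_cost(new_parent, pos)
        # accept improvements always, worse moves with Boltzmann probability
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            parent, cost = new_parent, new_cost
        temp *= cooling
    return parent, cost

positions = rng.uniform(0, 100, size=(30, 2))      # 30 sensors, sink at index 0
tree, energy = sa_aggregation_tree(positions)
```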
Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.
65 A New Distribution Network Reconfiguration Approach using a Tree Model
Authors: E. Dolatdar, S. Soleymani, B. Mozafari
Abstract:
Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, for reducing resistive line losses and relieving overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed, based on a simple optimum loss calculation by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph that consists of a set of nodes and branches. In fact, this problem can be viewed as the problem of determining an optimal tree of the graph which simultaneously ensures the radial structure of each candidate topology. In this method the refined genetic algorithm is also set up and some improvements are made to the chromosome coding. In this paper an implementation of the algorithm presented in [7] is applied by modifying the load flow program, and a comparison of this method with the proposed method is carried out. In [7] an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules. This algorithm reduces the number of load flow runs, reduces the switching combinations to a fewer number and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] and of other methods.
Keywords: Distribution system, reconfiguration, loss reduction, graph theory, optimization, genetic algorithm.
64 Computational Feasibility Study of a Torsional Wave Transducer for Tissue Stiffness Monitoring
Authors: Rafael Muñoz, Juan Melchor, Alicia Valera, Laura Peralta, Guillermo Rus
Abstract:
A torsional piezoelectric ultrasonic transducer design is proposed to measure shear moduli in soft tissue with direct access availability, using the shear wave elastography technique. The measurement of shear moduli of tissues is a challenging problem, mainly arising from a) the difficulty of isolating a pure shear wave, given the interference of multiple waves of different types (P, S, even guided) emitted by the transducers and reflected at geometric boundaries, and b) the highly attenuating nature of soft tissues. An immediate application that overcomes these drawbacks is the measurement of changes in cervix stiffness to estimate the gestational age at delivery. The design has been optimized using a finite element model (FEM) and a semi-analytical estimator of the probability of detection (POD) to determine a suitable geometry, materials and generated waves. The technique is based on the measurement of the time of flight between emitter and receiver, from which the shear wave velocity is inferred. Current research is centered on prototype testing and validation. The geometric optimization of the transducer was able to suppress the compressional wave emission, generating a fairly pure torsional shear wave. Currently, the mechanical and electromagnetic coupling between emitter and receiver signals is the focus of the research. Conclusions: the design overcomes the main problems described. The almost pure torsional shear wave, along with the short time of flight, avoids the possibility of multiple wave interference. The short propagation distance reduces the effect of attenuation and allows the emission of very low energies, ensuring good biological safety for human use.
Keywords: Cervix ripening, preterm birth, shear modulus, shear wave elastography, soft tissue, torsional wave.
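As a small numerical illustration of the time-of-flight relation behind the technique: the inferred shear wave speed gives the shear modulus through G = ρv²; the density and the example numbers are assumed values, not measurements from the study.

```python
def shear_modulus_from_tof(distance_m, time_of_flight_s, density_kg_m3=1000.0):
    """Shear wave elastography relation: the shear wave speed inferred from the
    emitter-receiver time of flight gives the shear modulus via G = rho * v**2.
    The density is an assumed soft-tissue figure."""
    v_s = distance_m / time_of_flight_s
    return density_kg_m3 * v_s ** 2

# e.g. a 20 mm propagation path and a 5 ms time of flight give v_s = 4 m/s, G = 16 kPa
print(shear_modulus_from_tof(0.020, 0.005))
```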
63 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake
Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou
Abstract:
Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, and particularly CDOM absorption at specific wavelengths, chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, characterized by virtually non-existent quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands in order to develop algorithms that best describe those relationships and calculate the aforementioned water quality components accurately. The optimal models were applied to the image of late October 2013, and the validation of the results was conducted through their comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As shown by the validation process, ammonium concentrations proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and the CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were measured as lower than the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations. Our results were concurrent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles that are detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality.
62 Cold Flow Investigation of Primary Zone Characteristics in Combustor Utilizing Axial Air Swirler
Authors: Yehia A. Eldrainy, Mohammad Nazri Mohd. Jaafar, Tholudin Mat Lazim
Abstract:
This paper presents a cold flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, responsible for the efficient mixing of fuel and air. Such models are necessary for predictions and optimization of real gas turbine combustors. The air swirler can control the combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influences residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used, based on their vane angles, i.e., 30°, 45°, and 60°. The three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry has been created using a solid model, and the meshing has been done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for design and analysis of a real combustor. The effects of the swirlers and of the mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained from the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flow field as well as on pressure losses.
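The strength of the swirl generated by each vane angle can be estimated with the usual geometric swirl number of an annular axial swirler; the hub and swirler diameters in the example are assumed values, not the rig's actual dimensions.

```python
import math

def swirl_number(vane_angle_deg, hub_diameter, swirler_diameter):
    """Geometric swirl number of an annular axial swirler (Beer and Chigier form):
    S = (2/3) * [(1 - (Dh/Ds)**3) / (1 - (Dh/Ds)**2)] * tan(theta)."""
    r = hub_diameter / swirler_diameter
    theta = math.radians(vane_angle_deg)
    return (2.0 / 3.0) * ((1 - r ** 3) / (1 - r ** 2)) * math.tan(theta)

# With the illustrative diameters below, a value above ~0.6 indicates strong swirl and a
# central recirculation zone, which is the case for the 45 and 60 degree swirlers.
for angle in (30, 45, 60):
    print(angle, round(swirl_number(angle, 0.02, 0.05), 2))
```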
Keywords: Cold flow, numerical simulation, combustor, turbulence, axial swirler.
61 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production from an electrolyser is considered a promising option since it is a clean energy source (zero emissions). Choosing flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring about many benefits, such as reducing the cost of hydrogen and helping to balance the electricity system. This paper investigates the price of hydrogen during flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost. These scenarios are: a standard one-tier tariff during the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. This study looks at the city of Derna, which is located on the coast of the Mediterranean Sea (32° 46′ 0 N, 22° 38′ 0 E) and has a high wind resource potential. Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used, in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.
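The three tariff scenarios can be compared with a back-of-the-envelope calculation like the one below; the specific energy consumption of the electrolyser and the off-peak window length are assumed figures, and storage and sizing costs are not included.

```python
def cost_per_kg(demand_kg, kwh_per_kg=55.0, offpeak_hours=8,
                flat=0.12, offpeak=0.05, peak=0.12):
    """Electricity cost per kg of H2 for the three tariff scenarios in the abstract
    (prices in GBP/kWh; ~55 kWh per kg of H2 is an assumed specific consumption)."""
    energy = demand_kg * kwh_per_kg                 # kWh needed per day
    s1 = energy * flat                              # scenario 1: flat tariff, continuous operation
    s2 = energy * offpeak                           # scenario 2: off-peak only (flexible operation)
    s3 = energy * (offpeak_hours * offpeak +        # scenario 3: time-of-use tariff, continuous
                   (24 - offpeak_hours) * peak) / 24
    return [s / demand_kg for s in (s1, s2, s3)]

# Note that flexible operation needs a larger electrolyser (and storage) to meet the same
# daily demand inside the off-peak window.
print(cost_per_kg(200))
```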
Keywords: Hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen.
60 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction
Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter
Abstract:
Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Current surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to internal organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. Optimization was done using a 3² factorial design using Design Expert® software is not applicable here; instead, the designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers them to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of a microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change forward kinematics algorithms to achieve the desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
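The voltage-to-force conversion performed in the Simulink model can be illustrated with a simple helper; the calibration gain and offset are placeholders standing in for the values obtained with the Imada force gauge.

```python
def adc_counts_to_force(counts, v_ref=5.0, adc_bits=10,
                        gain_n_per_volt=7.0, offset_volts=0.0):
    """Convert a 10-bit ADC reading from the strain-gage amplifier into force.
    The gain (N/V) and offset would come from the force-gauge calibration; the
    numbers used here are placeholders."""
    volts = counts * v_ref / (2 ** adc_bits - 1)
    return (volts - offset_volts) * gain_n_per_volt

# e.g. full scale (1023 counts) maps to 5 V, i.e. 35 N with a 7 N/V calibration gain,
# matching the 0-35 N interaction range mentioned above
print(adc_counts_to_force(1023))
```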
Keywords: Haptic feedback, MATLAB, Simulink, Strain Gage, Surgical Robot.
59 Optimization of Samarium Extraction via Nanofluid-Based Emulsion Liquid Membrane Using Cyanex 272 as Mobile Carrier
Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari
Abstract:
Samarium, as a rare-earth element, is playing a growing and important role in high technology. Traditional methods for the extraction of rare earth metals, such as ion exchange and solvent extraction, have the disadvantages of high investment and high energy consumption. Emulsion liquid membrane (ELM), as an improved solvent extraction technique, is an effective transport method for the separation of various compounds from aqueous solutions. In this work, the extraction of samarium from aqueous solutions by ELM was investigated using response surface methodology (RSM). The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as mobile carrier, and kerosene as base fluid. A 1 M nitric acid solution was used as the internal aqueous phase. The effects of the important process parameters on samarium extraction were investigated, and the values of these parameters were optimized using the Central Composite Design (CCD) of RSM. These parameters were the concentration of MWCNT in the nanofluid, the carrier concentration, and the volume ratio of organic membrane phase to internal phase (Roi). The three-dimensional (3D) response surfaces of samarium extraction efficiency were obtained to visualize the individual and interactive effects of the process variables. A regression model for % extraction was developed, and its adequacy was evaluated. The results show that the % extraction improves when the MWCNT nanofluid is used in the organic membrane phase, and an extraction efficiency of 98.92% can be achieved under the optimum conditions. In addition, demulsification was successfully performed, and the recycled membrane phase proved to be effective under the optimum conditions.
Keywords: Cyanex 272, emulsion liquid membrane, multiwalled carbon nanotubes, nanofluid, response surface methodology, Samarium.
58 Development of Manufacturing Simulation Model for Semiconductor Fabrication
Authors: Syahril Ridzuan Ab Rahim, Ibrahim Ahmad, Mohd Azizi Chik, Ahmad Zafir Md. Rejab, and U. Hashim
Abstract:
This research presents the development of a simulation model for WIP management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis due to complex process flows in which more than 35 percent of the processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication is required to produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides the complexity, the expensive wafer cost, which can impact the company's profit margin once a due date is missed, is another motivation to explore analysis options using simulation modeling. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data sources. Model parameters such as processing step cycle times, equipment performance, handling times and operator efficiency are collected through this customization. Once the parameters are validated, a few customizations are made to ensure the prior model is executed. The accuracy of the simulation model is validated against the actual output per day for all equipment. The comparison between the simulation model results and the actual output showed 95 percent accuracy over 30 days. This model was later used to perform various what-if analyses to understand impacts on cycle time and overall output. By using this simulation model, a complex manufacturing environment like a semiconductor fabrication plant (fab) now has an alternative source of validation for impact analysis of any new requirements.
Keywords: Advanced Productivity Family (APF), Complementary Metal Oxide Semiconductor (CMOS), Manufacturing Execution Systems (MES), Work In Progress (WIP).