Search results for: Clustering technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7108

5458 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes

Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil

Abstract:

The Raman Tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman Tweezers has thus enabled researchers to analyze single cells in a range of applications, including the study of blood cells, the monitoring of blood-related disorders, and silver nanoparticle-induced stress. As the applications of nanoparticles multiply, interest in their toxic effects has grown, and the interaction of nanoparticles with cells may vary with particle size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique, with the aim of investigating the size dependence of the nanoparticle effect on RBCs. A 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) was used for both trapping and Raman spectroscopic studies. A 100x oil-immersion objective with a high numerical aperture (NA 1.3) focused the laser beam into the sample cell. The back-scattered light was collected by the same objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers into vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of concentrated nanoparticles at the bottom of the vial; at a concentration of 0.02 mg/ml, the 1.5 ml contains 0.03 mg of nanoparticles, all of which remain in the 0.1 ml obtained. 25 ul of RBCs were diluted in 2 ml of PBS solution, treated with 50 ul (0.015 mg) of nanoparticles, and incubated in a CO2 incubator.
Raman spectroscopic measurements were made after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1, and 1638 cm-1. A decrease in the intensity of the 565 cm-1 band, an increase at 1211 cm-1 with a reduction at 1224 cm-1, an increase in intensity at 1371 cm-1, and the disappearance of the peak near 1638 cm-1 indicate deoxygenation of hemoglobin. The larger nanoparticles produced the greatest spectral changes, while the smallest changes were observed in the spectra of 10 nm nanoparticle-treated erythrocytes.

Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles

Procedia PDF Downloads 293
5457 Determination of Optimum Parameters for Thermal Stress Distribution in Composite Plate Containing a Triangular Cutout by Optimization Method

Authors: Mohammad Hossein Bayati Chaleshtari, Hadi Khoramishad

Abstract:

Minimizing the stress concentration around a triangular cutout in an infinite perforated plate subjected to a uniform heat flux, which induces thermal stresses, is an important consideration in engineering design. Furthermore, understanding the parameters that govern stress concentration, and selecting them properly, enables the designer to achieve a reliable design. For orthotropic materials, the parameters affecting the thermal stress distribution around the cutout include the fiber angle, heat flux angle, cutout bluntness, and cutout rotation angle. This paper examines the effect of these parameters on the thermal stress analysis of infinite perforated plates with a central triangular cutout. The least thermal stress around the triangular cutout was sought using a novel swarm-intelligence optimization technique, the dragonfly optimizer, which is inspired by the swarming and hunting behavior of dragonflies in nature. Using two-dimensional thermoelastic theory and Likhnitskii's complex variable technique, the stress analysis of an orthotropic infinite plate with a circular cutout under uniform heat flux was extended to a plate containing a quasi-triangular cutout in the thermal steady-state condition. To this end, a conformal mapping function was used to map the infinite plate containing a quasi-triangular cutout onto the outside of a unit circle. The plate is under uniform heat flux at infinity; Neumann boundary conditions and a thermally insulated condition were imposed at the edge of the cutout.
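
The optimization step described above can be sketched as follows. The objective function here is a hypothetical smooth surrogate for the stress concentration factor (the real objective would come from the complex-variable thermal stress solution), and the optimizer is a minimal particle-swarm-style stand-in, not the full dragonfly algorithm:

```python
import random

def stress_concentration(params):
    """Hypothetical surrogate objective; real values would come from the
    complex-variable thermal stress solution around the cutout."""
    fiber_angle, bluntness = params
    return (fiber_angle - 45.0) ** 2 / 100.0 + (bluntness - 0.1) ** 2 * 50.0 + 1.0

def swarm_minimize(f, bounds, n_particles=20, iters=200, seed=0):
    """Minimal swarm optimizer: particles track personal and global bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]
    best_f = [f(p) for p in pos]
    g = best[best_f.index(min(best_f))][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (best[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                # keep the particle inside the design-variable bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            fv = f(pos[i])
            if fv < best_f[i]:
                best_f[i], best[i] = fv, pos[i][:]
                if fv < f(g):
                    g = pos[i][:]
    return g, f(g)

# Search over (fiber angle in degrees, bluntness) within assumed bounds
params, k = swarm_minimize(stress_concentration, [(0.0, 90.0), (0.01, 0.5)])
```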

Keywords: infinite perforated plate, complex variable method, thermal stress, optimization method

Procedia PDF Downloads 150
5456 Enhancing Traditional Saudi Designs Pattern Cutting to Integrate Them Into Current Clothing Offers

Authors: Faizah Almalki, Simeon Gill, Steve G. Hayes, Lisa Taylor

Abstract:

A core element of cultural identity is the traditional costume, which provides insight into the heritage acquired over time. This heritage is apparent in the use of colour and in the styles and functions of the clothing, and it also reflects the skills of those who created the items and the time taken to produce them. Modern flat pattern drafting methods for making garment patterns are simple in comparison with the relatively laborious traditional approaches, which required personal interaction with the wearer throughout the production process. The current study reflects on the main elements of the pattern cutting system and how it has evolved in Saudi Arabia to affect the design of the Sawan garment. The traditional methods for constructing Sawan garments were analysed through observation of the practice and the garments and by consulting documented guidance. This provided a foundation for exploring how modern technology can be applied to improve the process. In this research, modern methods are proposed for producing traditional Saudi garments more efficiently while retaining elements of the conventional style and design. The study documented the vital aspects of the Sawan garment style. The analysis showed that the method used to take body measurements and make patterns was elementary, yielding simple geometric shapes, and that the Sawan garment is composed of four pieces. Consequently, this research allows classical pattern shapes to be embedded in garments now worn in Saudi Arabia and supports the continuation of cultural heritage.

Keywords: traditional Sawan garment technique, modern pattern cutting technique, the shape of the garment and software, Lectra Modaris

Procedia PDF Downloads 133
5455 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics

Authors: Varun Kumar, Chandra Shakher

Abstract:

Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes; imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.); beam homogenization for high-power lasers; a critical component of the Shack-Hartmann sensor; and fiber-optic coupling and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and the reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for competitiveness in worldwide markets; therefore, high demands are placed on quality assurance. For the quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of micro-lenses; however, these techniques require more experimental effort and are time consuming. Digital holography (DH) overcomes these problems. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (microlens or microlens array) from a single recorded digital hologram by numerical methods. One can also reconstruct the complex object wavefront at different depths through numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while the lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of the DHIM is that distortions due to aberrations in the optical system are avoided by interferometric comparison of the reconstructed phase with and without the object (microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram; a second hologram is recorded in the presence of the microlens array. The transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed by the Fresnel reconstruction method, from which one can evaluate the phase of the object wave in each state. The phase difference between the two states provides the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported; the sag agrees, within experimental limits, with the manufacturer's specification.
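
The final steps (phase difference to thickness profile, then sag to radius of curvature) reduce to two short formulas. A minimal sketch, using the 785 nm wavelength from the abstract and an assumed (hypothetical) refractive index for the lens material:

```python
import math

WAVELENGTH = 785e-9        # m, laser wavelength from the abstract
N_LENS, N_AIR = 1.46, 1.0  # assumed refractive indices (hypothetical values)

def thickness_from_phase(delta_phi):
    """Convert the measured phase difference (rad) into physical thickness:
    the optical path difference is (n_lens - n_air) * t = delta_phi * lambda / 2pi."""
    return delta_phi * WAVELENGTH / (2 * math.pi * (N_LENS - N_AIR))

def radius_of_curvature(sag, semi_aperture):
    """Spherical-cap relation between sag, lens semi-aperture, and ROC."""
    return (semi_aperture ** 2 + sag ** 2) / (2 * sag)
```

For example, a full 2-pi phase shift corresponds to a thickness of lambda / (n_lens - n_air), and a 5 um sag over a 50 um semi-aperture gives an ROC of about 0.25 mm.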

Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy

Procedia PDF Downloads 500
5454 Development of Medical Intelligent Process Model Using Ontology Based Technique

Authors: Emmanuel Chibuogu Asogwa, Tochukwu Sunday Belonwu

Abstract:

An urgent demand for creative solutions has been created by the rapid expansion of medical knowledge, the complexity of patient care, and the requirement for more precise decision-making. The creation of a Medical Intelligent Process Model (MIPM) using an ontology-based technique appears to be a promising way to overcome this obstacle and unleash the full potential of healthcare systems. The development of the MIPM is motivated by the lack of quick access to relevant medical information and of advanced tools for treatment planning and clinical decision-making, both of which ontology-based techniques can provide. The aim of this work is to develop a structured, knowledge-driven framework that leverages an ontology, a formal representation of domain knowledge, to enhance various aspects of healthcare. The Object-Oriented Analysis and Design Methodology (OOADM) was adopted in the design of the system, as we wished to build a usable and evolvable application. The medical dataset used to test our model was obtained from Kaggle. The ontology-based technique was used together with a confusion matrix, MySQL, Python, Hypertext Markup Language (HTML), Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, Dreamweaver, and Fireworks. According to test results evaluated with the confusion matrix, both the accuracy and the overall effectiveness of the medical intelligent process improved significantly, by 20%, compared with the previous system. The model is therefore recommended for healthcare professionals.
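
For reference, the accuracy figure reported from a confusion matrix is computed as the trace divided by the total count. The 2x2 counts below are illustrative only, not taken from the paper's dataset:

```python
def accuracy(confusion):
    """Overall accuracy = correctly classified (diagonal) / total samples,
    for an n-class confusion matrix given as a list of rows."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

cm = [[85, 15],   # hypothetical counts: true class 0 predicted as 0 / 1
      [10, 90]]   # hypothetical counts: true class 1 predicted as 0 / 1
```

Here `accuracy(cm)` evaluates to 175/200 = 0.875.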

Keywords: ontology-based, model, database, OOADM, healthcare

Procedia PDF Downloads 79
5453 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research addresses the empirical validation of three option valuation models: the ad-hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis was conducted on a sample of 26,974 options written on three indexes, the S&P 500, the Nasdaq 100, and the Russell 2000, that were traded during the year 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. We then use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of the model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential distribution exhibits three interesting properties: the leptokurtic feature, the memoryless property, and consistency with the psychological behavior of market participants. Numerous empirical studies have shown that markets tend to overreact and underreact to good and bad news, respectively. Despite these advantages, there are few empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to use nonlinear curve fitting through the trust-region-reflective algorithm on cross-section option data to estimate the structural parameters of the Kou jump-diffusion model.
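
The calibration step described above fits model parameters to a cross-section of option prices by bounded nonlinear least squares (in practice, e.g., SciPy's `least_squares` with `method='trf'` implements the trust-region-reflective algorithm). As a dependency-free sketch, the snippet below calibrates a single volatility parameter of a Black-Scholes pricer (a stand-in for the full Kou formula, which is lengthy) to synthetic prices, minimizing the squared-error objective by golden-section search within bounds:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (stand-in pricer for this demonstration)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def calibrate_sigma(strikes, prices, S, T, r, lo=0.01, hi=2.0, iters=80):
    """Bounded 1-D least-squares calibration by golden-section search
    (a simple stand-in for the trust-region-reflective algorithm)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    def sse(sig):  # sum of squared pricing errors across the cross-section
        return sum((bs_call(S, K, T, r, sig) - p) ** 2
                   for K, p in zip(strikes, prices))
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# Synthetic cross-section generated with a "true" volatility of 0.25
S0, T0, r0, true_sigma = 100.0, 1.0, 0.02, 0.25
strikes = [80.0, 90.0, 100.0, 110.0, 120.0]
prices = [bs_call(S0, K, T0, r0, true_sigma) for K in strikes]
sigma_hat = calibrate_sigma(strikes, prices, S0, T0, r0)
```

The calibrated volatility recovers the true value; for the Kou model the same scheme would fit the diffusion volatility, jump intensity, and the two exponential jump parameters simultaneously.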

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 429
5452 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error

Authors: Seyedamir Makinejadsanij

Abstract:

One of the most important factors in the production of quality steel is knowing the exact weight of the melt in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead cranes. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. A moving object has a variable apparent weight due to swinging, and the weighing system has an error of about +-5%. This means that when an object weighing about 80 tons is moved by a crane, the Disomat Tersus system reads about 4 tons more or 4 tons less, and this is the main obstacle to calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained with 3 cluster centers; compared with the plain average (one center) or with two, four, five, or six centers, three centers give the best answer, which is logical because the clusters above and below the real weight absorb the swing noise. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show an accuracy of about 40 kg per 60 tons (the standard weight); the accuracy of the moving-weight estimate is thus calculated as 99.95%. K-means is used to calculate the exact mean weight of the objects. The stopping criterion of the algorithm is 1000 iterations or no points moving between clusters. As a result of implementing this system, the crane operator no longer stops while moving objects and continues his activity regardless of weight calculations; production speed increased, and human error decreased.
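
The clustering step can be sketched as follows: weight readings logged while the load swings are grouped into three clusters (below, at, and above the true weight), and the middle cluster's center is taken as the estimate. This is a pure-Python 1-D k-means with the stopping criteria named in the abstract; the readings are synthetic (hypothetical numbers), not plant data:

```python
import random

def kmeans_1d(values, k=3, max_iter=1000):
    """1-D k-means; stops after max_iter iterations or when no point
    changes cluster. Initialization (min, median, max) assumes k == 3."""
    centers = [min(values), sorted(values)[len(values) // 2], max(values)][:k]
    assign = [0] * len(values)
    for _ in range(max_iter):
        new = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        if new == assign:          # no point moved between clusters
            break
        assign = new
        for j in range(k):         # recompute each cluster's mean
            members = [v for v, a in zip(values, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, assign

# Synthetic readings (kg): true load 80 000 kg plus swing excursions up/down
rng = random.Random(42)
readings = ([rng.gauss(80_000, 50) for _ in range(200)]
            + [rng.gauss(84_000, 300) for _ in range(50)]
            + [rng.gauss(76_000, 300) for _ in range(50)])
centers, assign = kmeans_1d(readings, k=3)
estimate = sorted(centers)[1]  # middle cluster ~ true weight, swing noise removed
```

The outer clusters soak up the high and low swing excursions, so the middle center lands close to the true 80 000 kg.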

Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem

Procedia PDF Downloads 91
5451 Enhancement of Light Extraction of Luminescent Coating by Nanostructuring

Authors: Aubry Martin, Nehed Amara, Jeff Nyalosaso, Audrey Potdevin, François Reveret, Michel Langlet, Genevieve Chadeyron

Abstract:

Energy-saving lighting devices based on Light-Emitting Diodes (LEDs) combine a semiconductor chip emitting in the ultraviolet or blue wavelength region with one or more phosphors deposited in the form of coatings. The most common devices combine a blue LED with the yellow phosphor Y₃Al₅O₁₂:Ce³⁺ (YAG:Ce) and a red phosphor. Even though these devices achieve satisfactory photometric parameters (Color Rendering Index, Color Temperature) and good luminous efficiencies, further improvements can be made to enhance the light extraction efficiency (increase in phosphor forward emission). One possible strategy is to pattern the phosphor coatings. Here, we have worked on different ways to nanostructure the coating surface. On the one hand, we used colloidal lithography combined with the Langmuir-Blodgett technique to directly pattern the surface of YAG:Tb³⁺ sol-gel-derived coatings, YAG:Tb³⁺ being used as a model phosphor. On the other hand, we produced composite architectures combining YAG:Ce coatings and ZnO nanowires. The structural, morphological, and optical properties of both systems have been studied and compared with flat YAG coatings. In both cases, nanostructuring brought a significant enhancement of the photoluminescence properties under UV or blue radiation. In particular, angle-resolved photoluminescence measurements have shown that nanostructuring modifies the photon path within the coatings, with better extraction of the guided modes. These two strategies have the advantage of being versatile and applicable to any phosphor synthesizable by the sol-gel technique. They thus appear to be promising ways to enhance the luminescence efficiencies of both phosphor coatings and the optical devices into which they are incorporated, such as LED-based lighting or safety devices.

Keywords: phosphor coatings, nanostructuring, light extraction, ZnO nanowires, colloidal lithography, LED devices

Procedia PDF Downloads 177
5450 High-Risk Gene Variant Profiling Models Ethnic Disparities in Diabetes Vulnerability

Authors: Jianhua Zhang, Weiping Chen, Guanjie Chen, Jason Flannick, Emma Fikse, Glenda Smerin, Yanqin Yang, Yulong Li, John A. Hanover, William F. Simonds

Abstract:

Ethnic disparities in many diseases are well recognized and reflect the consequences of genetic, behavioral, and environmental factors. However, direct scientific evidence connecting ethnic genetic variation and disease disparities has been elusive, which may have contributed to ethnic inequalities in large-scale genetic studies. Through a genome-wide analysis of data representing 185,934 subjects, including 14,955 from our own studies of African American Diabetes Mellitus, we discovered sets of genetic variants either unique to or conserved across all ethnicities. We further developed a quantitative, gene-function-based high-risk variant index (hrVI) for 20,428 genes to establish profiles that correlate strongly with subjects' self-identified ethnicities. With respect to the ability to detect human essential and pathogenic genes, the hrVI analysis method is both comparable with and complementary to the well-known genetic analysis methods pLI and VIRlof. Application of the ethnicity-specific hrVI analysis to the type 2 diabetes mellitus (T2DM) national repository, containing 20,791 cases and 24,440 controls, identified 114 candidate T2DM-associated genes, 8.8-fold more than ethnicity-blind analysis. All the identified genes are classified as either pathogenic or likely pathogenic in the ClinVar database, with 33.3% diabetes-associated and 54.4% obesity-associated genes. These results demonstrate the utility of hrVI analysis and provide the first genetic evidence, through clustering patterns, of how genetic variation among ethnicities may impede the discovery of genes associated with diabetes and, foreseeably, other diseases.

Keywords: diabetes-associated genes, ethnic health disparities, high-risk variant index, hrVI, T2DM

Procedia PDF Downloads 137
5449 An Analysis of the Causes of SMEs Failure in Developing Countries: The Case of South Africa

Authors: Paul Saah, Charles Mbohwa, Nelson Sizwe Madonsela

Abstract:

In the context of developing countries, this study examines a crucial component of economic development: the reasons behind the failure of small and medium-sized enterprises (SMEs). SMEs are acknowledged as essential drivers of economic expansion, job creation, and poverty alleviation in emerging countries. This research uses South Africa as a case study to evaluate why SMEs fail in developing nations. A quantitative research methodology was employed to investigate the complex causes of SME failure using statistical tools and reliability tests. To ensure the viability of data collection, a sample of 400 small business owners was chosen using a non-probability sampling technique. A closed-ended questionnaire was the primary instrument used to obtain detailed information from the participants. Data were analysed and interpreted using computer software packages such as the Statistical Package for the Social Sciences (SPSS). According to the findings, the main reasons SMEs fail in developing nations are a lack of strategic business planning, a lack of funding, poor management, a lack of innovation, a lack of business research, and a low level of education and training. The results show that SMEs can be sustainable and successful as long as they understand and incorporate the identified determinants of small business success into their daily operations: the more SMEs in developing countries implement these determinants in their business operations, the more likely the businesses are to succeed, and vice versa.

Keywords: failure, developing countries, SMEs, economic development, South Africa

Procedia PDF Downloads 79
5448 Transcranial and Sacral Magnetic Stimulation as a Therapeutic Resource for Urinary Incontinence – A Brief Bibliographic Review

Authors: Ana Lucia Molina

Abstract:

Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation technique for the investigation and modulation of cortical excitability in humans. Modulating the processing of different cortical areas can support rehabilitation in several domains, showing great potential in the treatment of motor disorders. In the human brain, the supplementary motor area (SMA) is involved in the control of the pelvic floor muscles, whose dysfunction can lead to urinary incontinence. Peripheral magnetic stimulation, specifically sacral magnetic stimulation, has been used as a safe and effective treatment option for patients with lower urinary tract dysfunction. A systematic literature review was carried out (PubMed, Medline, and Google Scholar databases) without a time limit using the keywords "transcranial magnetic stimulation", "sacral neuromodulation", and "urinary incontinence"; 11 articles met the inclusion criteria. Results: Thirteen articles were selected. Magnetic stimulation is a non-invasive neuromodulation technique widely used in the evaluation of cortical areas and their respective peripheral targets, as well as in the treatment of lesions of brain origin. With regard to pelvic-perineal disorders, repetitive transcranial stimulation showed significant effects in controlling urinary incontinence, as did sacral peripheral magnetic stimulation, which also has the potential to restore bladder sphincter function. Conclusion: Data from the literature suggest that both transcranial and peripheral stimulation are non-invasive approaches that can be promising and effective treatments for pelvic and perineal disorders. Larger prospective, randomized studies are needed to identify the most appropriate and effective stimulation parameters.

Keywords: urinary incontinence, non-invasive neuromodulation, sacral neuromodulation, transcranial magnetic stimulation

Procedia PDF Downloads 98
5447 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data

Authors: Devin Simmons

Abstract:

At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
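
The interpolation-and-centroid step of the algorithm can be sketched as follows. This is a simplified, dependency-free version: trips are plain coordinate lists, and the journey detection and DBSCAN filtering stages are assumed to have already run:

```python
import math

def point_at_fraction(line, t):
    """Interpolate the point at fraction t (0..1) of a polyline's length."""
    seg = [math.dist(line[i], line[i + 1]) for i in range(len(line) - 1)]
    total = sum(seg)
    target = t * total
    run = 0.0
    for i, s in enumerate(seg):
        if run + s >= target or i == len(seg) - 1:
            f = (target - run) / s if s else 0.0
            x0, y0 = line[i]
            x1, y1 = line[i + 1]
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        run += s

def average_route(trips, n_points=100):
    """Average several trip polylines into one representative route by
    taking the centroid of same-fraction points across all trips."""
    fracs = [i / (n_points - 1) for i in range(n_points)]
    route = []
    for t in fracs:
        pts = [point_at_fraction(trip, t) for trip in trips]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        route.append((cx, cy))
    return route

# Two hypothetical trips along the same route, offset from each other
trips = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 2.0), (10.0, 2.0)]]
route = average_route(trips, n_points=5)
```

For the two parallel trips above, the averaged route runs midway between them, from (0, 1) to (10, 1).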

Keywords: ferry vessels, transportation, modeling, AIS data

Procedia PDF Downloads 178
5446 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique

Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi

Abstract:

The sand casting process involves pattern making, mould making, metal pouring, and shake-out. Every step in the sand moulding process is critical for the production of good-quality castings. However, waste generated during the sand moulding operation and lack of quality are matters that drive performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced in the sand moulding process only become visible in the final product (the casting), which results in increased scrap, reduced sales, and increased costs in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve, Control) intervention in sand moulding foundries to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven approach to process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives using Ishikawa's seven basic tools of quality. The objectives of Six Sigma are to eliminate factors that affect productivity, profit, and the ability to meet customers' demands, and it has become one of the most important techniques for attaining competitive advantage. For sand casting foundries in South Africa, competitive advantage means improved plant maintenance processes, improved product quality, and proper utilization of resources, especially scarce ones. Defects such as sand inclusion, flashes, and sand burn-on were identified, using the Six Sigma technique, as resulting from inefficiencies in the sand moulding process. The causes were found to be wrong design of the mould, due to the pattern used, and poor ramming of the moulding sand in the foundry.
Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process, and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters required for quality improvement were achieved. The process capability of the sand moulding process was measured to understand current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity, and competitive advantage.

Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)

Procedia PDF Downloads 195
5445 Non-Linear Load-Deflection Response of Shape Memory Alloys-Reinforced Composite Cylindrical Shells under Uniform Radial Load

Authors: Behrang Tavousi Tehrani, Mohammad-Zaman Kabir

Abstract:

Shape memory alloys (SMA) are often implemented in smart structures as active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded in composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells under a uniform external pressure load. Based on first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The simplified one-dimensional Brinson model is used to determine the SMA recovery force because of its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain non-linear load-deflection curves. The results are verified by comparison with those in the literature. Several parametric studies are conducted to investigate the effect of SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable use of SMA wires results in a considerable enhancement of the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
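
The simplified one-dimensional Brinson constitutive relation referred to above can be written in its standard form (symbols as commonly defined in the SMA literature; reproduced here for context from that literature, not from the paper itself):

```latex
\sigma - \sigma_0 = E(\xi)\,\varepsilon - E(\xi_0)\,\varepsilon_0
                  + \Omega(\xi)\,\xi_s - \Omega(\xi_0)\,\xi_{s0}
                  + \Theta\,(T - T_0),
\qquad
E(\xi) = E_A + \xi\,(E_M - E_A),
\qquad
\Omega(\xi) = -\varepsilon_L\, E(\xi)
```

where $\xi$ is the total martensite fraction, $\xi_s$ its stress-induced part, $E_A$ and $E_M$ the austenite and martensite moduli, $\varepsilon_L$ the maximum recoverable strain, and $\Theta$ the thermoelastic coefficient. Setting the total strain and evaluating the stress during constrained recovery yields the tensile recovery force used in the shell equilibrium equations.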

Keywords: airy stress function, cylindrical shell, Galerkin technique, load-deflection curve, recovery stress, shape memory alloy

Procedia PDF Downloads 190
5444 Microencapsulation of Phenobarbital by Ethyl Cellulose Matrix

Authors: S. Bouameur, S. Chirani

Abstract:

The aim of this study was to evaluate the potential use of ethyl cellulose in the preparation of microspheres as a drug delivery system for the sustained release of phenobarbital. The microspheres were prepared by the solvent evaporation technique using ethyl cellulose as the polymer matrix at a 1:2 ratio, dichloromethane as the solvent, and 1% polyvinyl alcohol as the processing medium to solidify the microspheres. Size, shape, drug loading capacity, and entrapment efficiency were studied.

Keywords: phenobarbital, microspheres, ethylcellulose, polyvinyl alcohol

Procedia PDF Downloads 361
5443 The Use of Mobile Phone as Enhancement to Mark Multiple Choice Objectives English Grammar and Literature Examination: An Exploratory Case Study of Preliminary National Diploma Students, Abdu Gusau Polytechnic, Talata Mafara, Zamfara State, Nigeria

Authors: T. Abdulkadir

Abstract:

Marking and assessing multiple choice examinations is often regarded as a cumbersome, herculean task to accomplish manually in Nigeria. This is largely because large numbers of candidates are known to take the same examination simultaneously; marking such a mammoth number of booklets daunts even the fastest paid examiners, who often undertake the job with the resulting consequences of stress and boredom. This paper explores the evolution of, and sets out to envision, a way to transform the marking of multiple choice objective examinations into a more creative, or perhaps more relaxing, activity via the use of the mobile phone. A pragmatic approach was employed rather than a formal in-depth research design, owing to the novelty of the mobile-smartphone e-marking scheme. Moreover, no recent academic work sharing the same topic, the use of the cell phone as an e-marking technique, was found online; hence the dearth of even miscellaneous citations in this work. The anticipation of future advancements steered the motive of this paper, which lays out the fundamental proposition. The paper introduces for the first time the concept of mobile-smartphone e-marking, the steps to achieve it, and the merits and demerits of the technique, all spelt out in the subsequent pages.

Keywords: cell phone, e-marking scheme (eMS), mobile phone, mobile-smart phone, multiple choice objectives (MCO), smartphone

Procedia PDF Downloads 263
5442 Thermal Performance of an Air-Water Heat Exchanger (AWHE) Operating in Groundwater and Hot-Humid Climate

Authors: César Ramírez-Dolores, Jorge Wong-Loya, Jorge Andaverde, Caleb Becerra

Abstract:

Low-depth geothermal energy can exploit the subsoil as an air-conditioning technique, used either as a passive system or coupled to an active cooling and/or heating system. This source of air conditioning is possible because, at depths of less than 10 meters, the subsoil temperature is practically homogeneous and tends to be constant regardless of the climatic conditions at the surface. The effect of temperature fluctuations at the soil surface decreases as depth increases due to the thermal inertia of the soil, producing temperature stability; this effect presents several advantages in the context of sustainable energy use. In the present work, the thermal behavior of a horizontal Air-Water Heat Exchanger (AWHE) is evaluated, and the thermal effectiveness and the air temperature at the outlet of the prototype, immersed in groundwater, are determined experimentally. The thermohydraulic performance of the heat exchanger was evaluated using the Number of Transfer Units-effectiveness (NTU-ε) method under conditions of groundwater flow in a coastal region of sandy soil (southeastern Mexico) and air flow induced by a blower. The system was constructed of polyvinyl chloride (PVC), and sensors were placed in both the exchanger and the water to record temperature changes. The results of this study indicate that when the exchanger operates in groundwater it shows high thermal gains that allow better heat transfer and therefore significantly reduce the air temperature at the outlet of the system, raising its thermal effectiveness above 80%. This passive technique is relevant for building cooling applications and could represent a significant development in thermal comfort for hot locations in emerging-economy countries.
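As a rough illustration of the NTU-ε method mentioned above: when the groundwater acts as a near-isothermal reservoir (capacity-rate ratio tending to zero), the effectiveness reduces to ε = 1 − exp(−NTU). The sketch below uses invented, order-of-magnitude values, not the experimental data of the study:

```python
import math

# NTU-effectiveness sketch for an air-to-water exchanger whose water side
# stays nearly isothermal (groundwater as a large reservoir, C_ratio -> 0),
# so effectiveness reduces to  eps = 1 - exp(-NTU),  NTU = U*A / (m_dot*cp).
# All numbers below are illustrative assumptions, not the paper's data.
U = 30.0         # overall heat-transfer coefficient, W/(m^2 K)
A = 6.0          # exchange area, m^2
m_dot = 0.05     # air mass flow rate, kg/s
cp = 1006.0      # specific heat of air, J/(kg K)

NTU = U * A / (m_dot * cp)
eps = 1.0 - math.exp(-NTU)

T_in, T_water = 33.0, 26.0          # deg C: hot-humid inlet vs groundwater
T_out = T_in - eps * (T_in - T_water)
print(f"NTU = {NTU:.2f}, effectiveness = {eps:.2f}, outlet air = {T_out:.1f} C")
```

With these assumed values the effectiveness exceeds 0.9, consistent with the high (>80%) effectiveness the abstract reports for groundwater operation.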

Keywords: convection, earth, geothermal energy, thermal comfort

Procedia PDF Downloads 73
5441 A Review of Blog Assisted Language Learning Research: Based on Bibliometric Analysis

Authors: Bo Ning Lyu

Abstract:

Blog-assisted language learning (BALL) has been trialed by educators in language teaching with the development of Web 2.0 technology. Understanding the development trend of related research helps grasp the whole picture of the use of blogs in language education. This paper reviews current research related to blog-enhanced language learning based on bibliometric analysis, aiming at (1) identifying the most frequently used keywords and their co-occurrence, (2) clustering research topics based on co-citation analysis, (3) finding the most frequently cited studies and authors, and (4) constructing the co-authorship network. 330 articles were retrieved from Web of Science, and 225 peer-reviewed journal papers were finally collected according to the selection criteria. Bibexcel and VOSviewer were used to visualize the results. The studies reviewed were published between 2005 and 2016, most in 2014 and 2015 (35 papers each). The top 10 most frequent keywords are learning, language, blog, teaching, writing, social, web 2.0, technology, English, and communication. Eight research themes could be clustered by co-citation analysis: blogging for collaborative learning, blogging for writing skills, blogging in higher education, feedback via blogs, blogging for self-regulated learning, implementation of blogs in the classroom, comparative studies, and audio/video blogs. Early studies focused on classroom implementation, while recent studies have moved from the traditional usage of blogs toward audio/video blogs. By reviewing the research related to BALL quantitatively and objectively, this paper reveals the evolution and development trends, identifies influential research, and helps researchers and educators quickly grasp this field and conduct further studies.
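The keyword co-occurrence step of such a bibliometric analysis can be sketched in a few lines of standard-library Python; the toy records below are invented, not the 225-paper corpus:

```python
from collections import Counter
from itertools import combinations

# Keyword frequency and co-occurrence counting, the raw input that tools
# like VOSviewer visualize as a co-occurrence network. The "papers" below
# are invented toy records, not the reviewed corpus.
papers = [
    {"blog", "writing", "language"},
    {"blog", "language", "web 2.0"},
    {"blog", "writing", "feedback"},
]

freq = Counter(k for kws in papers for k in kws)
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(kws), 2):
        cooc[(a, b)] += 1       # each unordered pair counted once per paper

print(freq.most_common(3))      # most frequent keywords
print(cooc.most_common(2))      # strongest co-occurrence links
```

The same pair-counting scheme applies to co-citation analysis (pairs of cited references instead of pairs of keywords) and co-authorship networks (pairs of authors).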

Keywords: blog, bibliometric analysis, language learning, literature review

Procedia PDF Downloads 211
5440 Undoped and Fluorine Doped Zinc Oxide (ZnO:F) Thin Films Deposited by Ultrasonic Chemical Spray: Effect of the Solution on the Electrical and Optical Properties

Authors: E. Chávez-Vargas, M. de la L. Olvera-Amador, A. Jimenez-Gonzalez, A. Maldonado

Abstract:

Undoped and fluorine-doped zinc oxide (ZnO) thin films were deposited on sodocalcic glass substrates by the ultrasonic chemical spray technique. As the main goal is the manufacture of transparent electrodes, the effects of both the solution composition and the substrate temperature on the electrical and optical properties of the ZnO thin films were studied. Specifically, the fluorine concentration ([F]/[F+Zn] at. %), the solvent composition (acetic acid, water, and methanol ratios), and the ageing time of the solution were varied, as were the substrate temperature and the deposition time of the chemical spray process. Structural studies confirm the deposition of polycrystalline, hexagonal, wurtzite-type ZnO. The results show that increasing the [F]/[F+Zn] ratio in the solution decreases the sheet resistance, RS, of the ZnO:F films, reaching a minimum, on the order of 1.6 Ωcm, at 60 at. %; further increases in the [F]/[F+Zn] ratio increase the RS of the films. The same trend occurs with the variation in substrate temperature, as the minimum RS of the ZnO:F thin films was obtained at TS = 450 °C. ZnO:F thin films deposited with an aged solution show a significant decrease in RS, on the order of 100 Ω. The transmittance of the films was also favorably affected by the solvent ratio and, more significantly, by the ageing of the solution. The overall evaluation of the optical and electrical characteristics of the ZnO:F thin films deposited under different conditions was made using Haacke's figure of merit in order to obtain a clear, quantitative trend for the transparent-conductor application.
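Haacke's figure of merit mentioned above combines transmittance and sheet resistance as Φ = T¹⁰/RS, so it rewards transparency much more strongly than conductivity. A small sketch with illustrative (not measured) film values:

```python
# Haacke's figure of merit for a transparent conducting film:
#   phi = T**10 / Rs   (T: optical transmittance, Rs: sheet resistance)
# The film values below are illustrative, not measurements from the paper.
films = {
    "ZnO:F, fresh solution": {"T": 0.85, "Rs": 60.0},
    "ZnO:F, aged solution":  {"T": 0.88, "Rs": 100.0},
}

def haacke_fom(T, Rs):
    return T**10 / Rs

for name, f in films.items():
    print(f"{name}: phi = {haacke_fom(f['T'], f['Rs']):.2e} ohm^-1")
```

Because of the tenth power, a film that trades a few percent of transmittance for lower sheet resistance can still win on Φ, which is why the overall evaluation uses this single figure rather than RS or T alone.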

Keywords: zinc oxide, ZnO:F, TCO, Haacke’s figure of Merit

Procedia PDF Downloads 314
5439 An Analysis of the Panel’s Perceptions on Cooking in “Metaverse Kitchen”

Authors: Minsun Kim

Abstract:

This study uses the concepts of augmented reality, virtual reality, the mirror world, and lifelogging to describe the "Metaverse Kitchen," which can be defined as a space in the virtual world where users can cook the dishes they want using meal kits, regardless of location or time. The study examined experts' perceptions of cooking and food delivery services using the "Metaverse Kitchen." A consensus opinion on the concept and the potential pros and cons of the "Metaverse Kitchen" was derived from 20 culinary experts through the Delphi technique; three Delphi rounds were conducted over one month, from December 2022 to January 2023. The results are as follows. First, users select and cook food after visiting the "Metaverse Kitchen" in the virtual space. Second, when a user cooks in the "Metaverse Kitchen" in AR or VR, the information is transmitted to nearby restaurants. Third, the platform operating the "Metaverse Kitchen" assigns the order to whichever of these restaurants can first provide the meal kit cooked by the user in the virtual space. Fourth, the user pays the "Metaverse Kitchen," and the restaurant delivers the cooked meal kit to the user and then receives payment for the user's meal and the delivery fee from the platform. Fifth, the platform company that operates the mirror-world "Metaverse Kitchen" uses lifelogging to manage customers; it receives commissions from users and affiliated restaurants and operates virtual restaurant businesses using meal kits. Among the selection attributes of the meal kits provided in the "Metaverse Kitchen," the panelists suggested convenience, quality, and reliability as advantages and predicted a relatively high price as a disadvantage. The "Metaverse Kitchen" using meal kits is expected to form a new food supply system in the future society. Follow-up studies should include an empirical analysis targeting producers and consumers.

Keywords: metaverse, meal kits, Delphi technique, Metaverse Kitchen

Procedia PDF Downloads 222
5438 Robust Method for Evaluation of Catchment Response to Rainfall Variations Using Vegetation Indices and Surface Temperature

Authors: Revalin Herdianto

Abstract:

Recent climate change increases uncertainty about vegetation conditions, such as health and biomass, globally and locally. Detection is, however, difficult due to the spatial and temporal scale of vegetation coverage. Because vegetation responds uniquely to its environmental conditions, such as water availability, the interplay between vegetation dynamics and hydrologic conditions leaves a signature in their feedback relationship. Vegetation indices (VI) depict vegetation biomass and photosynthetic capacity, and thus indicate vegetation dynamics as a response to variables including hydrologic conditions and microclimate factors such as rainfall characteristics and land surface temperature (LST). It is hypothesized that this signature may be depicted by VI in its relationship with the other variables. To study this signature, several catchments in Asia, Australia, and Indonesia were analysed to assess the variations in hydrologic characteristics with vegetation types. The methods used in this study include geographic identification and pixel marking for the studied catchments, analysis of the time series of VI and LST of the marked pixels, and smoothing using the Savitzky-Golay filter, which is effective for large areas and extensive data. Time series of VI, LST, and rainfall from satellites and ground stations, coupled with digital elevation models, were analysed and presented. This study found that the hydrologic response of vegetation to rainfall variations may be seen within one hydrologic year, in which a drought event can be detected a year later as suppressed growth. However, above-average annual rainfall does not promote above-average growth as shown by VI. The technique is found to be a robust and tractable approach for assessing catchment dynamics in changing climates.
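The Savitzky-Golay smoothing step can be sketched with a minimal NumPy implementation (in practice, scipy.signal.savgol_filter is the standard tool); the noisy series below is synthetic, not actual VI data:

```python
import numpy as np

# Minimal Savitzky-Golay smoother: fit a low-order polynomial to each
# sliding window by least squares and keep the fitted value at the window
# center. This preserves peaks better than a plain moving average, which
# is why it is popular for VI/NDVI time series.
def savgol(y, window=7, order=2):
    half = window // 2
    x = np.arange(-half, half + 1)
    # Least-squares fit value at x=0 is a fixed linear combination of the
    # window samples; its weights are the first row of pinv(Vandermonde).
    A = np.vander(x, order + 1, increasing=True)
    weights = np.linalg.pinv(A)[0]
    ypad = np.pad(np.asarray(y, dtype=float), half, mode="edge")
    return np.convolve(ypad, weights[::-1], mode="valid")

# Toy NDVI-like seasonal signal with noise (illustrative, not satellite data):
t = np.linspace(0, 2 * np.pi, 100)
rng = np.random.default_rng(0)
noisy = np.sin(t) + 0.1 * rng.standard_normal(t.size)
smooth = savgol(noisy, window=11, order=2)
print(f"error std before: {np.std(noisy - np.sin(t)):.3f}, "
      f"after: {np.std(smooth - np.sin(t)):.3f}")
```

The precomputed-weights trick is what makes the filter "effective for large areas and extensive data": smoothing reduces to a single convolution per pixel time series.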

Keywords: vegetation indices, land surface temperature, vegetation dynamics, catchment

Procedia PDF Downloads 287
5437 Enhanced Optical Nonlinearity in Bismuth Borate Glass: Effect of Size of Nanoparticles

Authors: Shivani Singla, Om Prakash Pandey, Gopi Sharma

Abstract:

Metallic-nanoparticle-doped glasses have led to rapid development in the field of optics. Large third-order non-linearity, ultrafast time response, and a wide range of resonant absorption frequencies make metallic nanoparticles more important in comparison to their bulk material. All these properties are highly dependent upon the size, shape, and surrounding environment of the nanoparticles. In a quest to find a suitable material for optical applications, several efforts have been devoted to improving the properties of such glasses in the past. In the present study, bismuth borate glass doped with gold nanoparticles (AuNPs) of different sizes has been prepared using the conventional melt-quench technique. The synthesized glasses are characterized by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) to observe the structural modification of the glassy matrix with the variation in the size of the AuNPs. The glasses remain purely amorphous in nature even after the addition of AuNPs, and FTIR suggests that the main structure contains BO₃ and BO₄ units. Field emission scanning electron microscopy (FESEM) confirms the existence and the variation in size of the AuNPs. Differential thermal analysis (DTA) shows that the prepared glasses are thermally stable and highly suitable for the fabrication of optical fibers. The nonlinear optical parameters (nonlinear absorption coefficient and nonlinear refractive index) are determined using the Z-scan technique with a Ti:sapphire laser at 800 nm. It is concluded that the size of the nanoparticles strongly influences the structural, thermal, and optical properties of the system.
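Extraction of the nonlinear absorption coefficient from an open-aperture Z-scan is commonly based on the approximate normalized transmittance T(z) ≈ 1 − q0/(2√2·(1 + z²/z0²)) with q0 = β·I0·Leff. A sketch with assumed (not measured) parameters:

```python
import numpy as np

# Open-aperture Z-scan sketch. The sample's normalized transmittance near
# the focus, for two-photon-type absorption, is approximately
#   T(x) = 1 - q0 / (2*sqrt(2) * (1 + x**2)),  x = z/z0,  q0 = beta*I0*Leff.
# All parameter values are illustrative assumptions, not the paper's data.
beta = 5e-12      # m/W, nonlinear absorption coefficient
I0 = 2e13         # W/m^2, on-axis peak intensity at the focus
Leff = 1e-3       # m, effective sample thickness
z0 = 1.0          # Rayleigh range, in the chosen z units

z = np.linspace(-5, 5, 201)
q0 = beta * I0 * Leff
T = 1.0 - q0 / (2.0 * np.sqrt(2.0) * (1.0 + (z / z0) ** 2))

# Recover q0 (and hence beta) from the depth of the transmittance dip:
dip = 1.0 - T.min()
q0_est = dip * 2.0 * np.sqrt(2.0)
beta_est = q0_est / (I0 * Leff)
print(f"q0 = {q0:.3f}, recovered from dip: {q0_est:.3f}")
```

In a real measurement the dip depth is read from fitted data rather than a forward model, but the inversion from dip to q0, and from q0 to β, is the same.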

Keywords: bismuth borate glass, different size, gold nanoparticles, nonlinearity

Procedia PDF Downloads 123
5436 Online Monitoring of Airborne Bioaerosols Released from a Composting, Green Waste Site

Authors: John Sodeau, David O'Connor, Shane Daly, Stig Hellebust

Abstract:

This study is the first to employ the online WIBS (Wideband Integrated Bioaerosol Sensor) technique for the monitoring of bioaerosol emissions and non-fluorescing "dust" released from a composting/green waste site. The purpose of the research was to provide a proof of principle for using WIBS to monitor such a location continually over days and nights in order to construct comparative "bioaerosol site profiles". Current impaction/culturing methods take many days to achieve results that are available from the WIBS technique in seconds. The real-time data obtained were then used to assess variations in the bioaerosol counts as a function of size, "shape", site location, working activity levels, time of day, relative humidity, wind speed, and wind direction. Three short campaigns were undertaken: one classified as a "light" workload period, another as a "heavy" workload period, and finally a weekend when the site was closed. One main bioaerosol size regime was found to predominate, 0.5 micron to 3 micron, with morphologies ranging from elongated to ellipsoidal/spherical. The real-time number concentrations were consistent with an Andersen sampling protocol that was employed at the site. The number concentrations of fluorescent particles as a proportion of total particles counted amounted, on average, to ~1% for the "light" workday period, ~7% for the "heavy" workday period, and ~18% for the weekend. The bioaerosol release profiles at the weekend were considerably different from those monitored during the working weekdays.

Keywords: bioaerosols, composting, fluorescence, particle counting in real-time

Procedia PDF Downloads 356
5435 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has been a static channel assignment process initiated by the user during the deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms that consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to this high overhead, can cause the eventually selected channel to no longer be optimal for operation because of the dynamic sharing nature of the unlicensed band. This has inspired us to develop a dynamic channel selection algorithm whose overhead is reduced through the proposed cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs that are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, causing high overhead, the AP divides the STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing the feedback overhead. The simulation results show equal performance and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential for the design of future dynamic channel selection algorithms with low overhead.
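A minimal sketch of the cluster-based reporting idea, with a tiny DBSCAN implementation for the spatial grouping (station coordinates and the channel list are invented toy data, and the real algorithm's clustering details may differ):

```python
import numpy as np
from collections import deque

# Stations that are close together see similar channel conditions, so each
# cluster member is asked to report on a *different* candidate channel.
def dbscan(pts, eps=2.0, min_pts=2):
    """Minimal DBSCAN; returns one cluster id per point (-1 = noise)."""
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    n = len(pts)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cid = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        seeds = deque(np.flatnonzero(dist[p] <= eps))
        if len(seeds) < min_pts:
            continue                         # p is noise (for now)
        labels[p] = cid
        while seeds:                         # expand the cluster
            q = seeds.popleft()
            if labels[q] == -1:
                labels[q] = cid              # border/core point joins
            if not visited[q]:
                visited[q] = True
                nq = np.flatnonzero(dist[q] <= eps)
                if len(nq) >= min_pts:       # q is a core point
                    seeds.extend(nq)
        cid += 1
    return labels

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0],   # near the AP
                     [10.0, 10.0], [11.0, 10.5],           # far corner
                     [30.0, 0.0]])                         # isolated STA
channels = [36, 40, 44]
labels = dbscan(stations)

# Round-robin channel assignment inside each cluster: the AP still hears
# about every channel without any STA scanning all of them.
assignment, counters = {}, {}
for sta, c in enumerate(labels):
    counters[c] = counters.get(c, -1) + 1
    assignment[sta] = channels[counters[c] % len(channels)]
print(labels.tolist(), assignment)
```

With three STAs in the near cluster, each candidate channel is covered by exactly one report per cycle instead of three, which is the source of the overhead reduction.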

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 121
5434 A Study of The STEAM Toy Pedagogy Plan Evaluation for Elementary School

Authors: Wen-Te Chang, Yun-Hsin Pai

Abstract:

Purpose: Based on the interdisciplinary curriculum of the lower grades of elementary school, and integrating the STEAM concept, related wooden toys and pedagogy plans were developed and evaluated. The research goal was to benefit elementary school education. Design/methodology/approach: The subjects were teachers from two primary schools and students from university design departments in Taipei. A total of 103 participants (male: 34, female: 69) were invited to take part in the research. The research tools were "STEAM toy design" and the "questionnaire of the STEAM toy pedagogy plan." The STEAM toy pedagogy plans were evaluated after the activity "The interdisciplinary literacy discipline guiding study program--STEAM wooden workshop." Findings/results: (1) Factor analysis of the questionnaire indicated that the percentages for the major factors were cognition 68.61%, affection 80.18%, and technique 80.14%, with α = .936; the assessment tool thus proved valid for STEAM pedagogy plan evaluation. (2) The analysis of the questionnaire responses confirmed that the main effect of the teaching factors was not significant (affection = technique = cognition); however, the interaction between the STEAM factors was significant (F(8, 1164) = 5.51, p < .01). (3) The main effect of the six pedagogy plans was significant (climbing toy > bird toy = gondola toy > frog castanets > train toy > balancing toy), and the interaction between the STEAM factors and the pedagogy plans also reached a significant level (F(8, 1164) = 5.51, p < .01), especially on the artistic (A/Art) aspect. Originality/value: The main achievements of the research: (1) A pedagogy plan evaluation was successfully developed. (2) The interaction between the STEAM and teaching factors reached a significant level. (3) The interaction between the STEAM factors and the pedagogy plans also reached a significant level.

Keywords: STEAM, toy design, pedagogy plans, evaluation

Procedia PDF Downloads 284
5433 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example

Authors: Hongyun Li, Zhibin Jiang

Abstract:

The travel of passengers in an urban rail transit network is influenced by changes in the network structure and operational status, and individual travel preferences respond to these changes differently. Firstly, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to achieve frequent-passenger clustering. Then, a Graph Convolutional Network (GCN) is used to model and identify the changes in the travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results show that after the section shutdown, most passengers transferred to the nearest station, Anting, for boarding, while some passengers transferred to other stations or cancelled their travel entirely. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few stopped traveling or transferred to other stations after a few days of boarding at Anting station. The results can provide a basis for understanding urban rail transit passenger travel patterns and for improving the accuracy of passenger flow prediction in abnormal operation scenarios.
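The GCN modelling step propagates node features over the network graph; one layer in the standard Kipf-Welling form, H' = ReLU(D^(-1/2)(A+I)D^(-1/2)HW), can be sketched as follows (the station graph, features, and weights are toy placeholders, not the paper's panel data):

```python
import numpy as np

# One graph-convolution layer: each node's new embedding is a degree-
# normalized average of its own and its neighbours' features, passed
# through a learnable linear map and a ReLU.
A = np.array([[0, 1, 1, 0],      # toy adjacency of 4 stations
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                     # one-hot node features
rng = np.random.default_rng(42)
W = rng.standard_normal((4, 2))   # layer weights (random stand-in; learned in practice)

A_hat = A + np.eye(4)             # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next.shape)               # (4, 2): a 2-d embedding per station
```

Stacking a few such layers lets each station's embedding absorb information from stations several hops away, which is what allows travel-mode changes to be identified from network-wide patterns rather than single-station counts.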

Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern

Procedia PDF Downloads 86
5432 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now applied in various fields, and the techniques behind them have improved over time. In general, such systems help users find the products or services they require (e.g., books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations; these recommendations can facilitate users' decision-making. Despite the wide literature on the topic, the use of multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types that implicitly or explicitly help the system improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in several layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and the hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
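The matrix factorization module of such an engine can be sketched as plain stochastic gradient descent on the observed ratings; the toy rating matrix and hyperparameters below are illustrative, not the paper's data:

```python
import numpy as np

# Learn latent factors P (users) and Q (hotels) so that P @ Q.T
# approximates the observed ratings; unobserved entries (0) are skipped
# during training and can then be predicted as P[u] @ Q[i].
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
n_users, n_items, k = *R.shape, 2
rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((n_users, k))
Q = 0.1 * rng.standard_normal((n_items, k))

lr, reg = 0.01, 0.02
obs = [(u, i, R[u, i]) for u in range(n_users)
       for i in range(n_items) if R[u, i] > 0]
for _ in range(2000):
    for u, i, r in obs:
        e = r - P[u] @ Q[i]                 # prediction error
        P[u] += lr * (e * Q[i] - reg * P[u])
        Q[i] += lr * (e * P[u] - reg * Q[i])

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in obs]))
print(f"training RMSE: {rmse:.3f}")
```

In a multi-aspect setting, the same factorization is typically run per aspect (cleanliness, location, etc.) or with aspect-specific biases, and the results are blended by the hybrid layer.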

Keywords: tourism, hotel recommender system, hybrid, implicit features

Procedia PDF Downloads 274
5431 Response of Wheat and Lentil to Herbicides Applied in the Preceding Non-Puddled Transplanted Rainy Season Rice

Authors: Taslima Zahan

Abstract:

A field study was conducted in 2013-14 and 2014-15, following a bioassay technique, to determine the carryover effect of herbicides applied to rainy season rice on the growth and yield of two probable succeeding crops, wheat and lentil. Rice seedlings were transplanted in a strip-tilled, non-puddled field, and five herbicides, namely pyrazosulfuron-ethyl, butachlor, orthosulfamuron, butachlor + propanil, and 2,4-D amine, were applied to the rice at their recommended rates and times as eight treatment combinations, compared with one untreated control. The residual effects of these rice herbicides on the succeeding wheat and lentil were examined using a micro-plot bioassay technique. The study revealed that the germination of wheat and lentil seeds was not affected by the residue of the herbicides applied to the preceding rainy season rice. The shoot lengths of wheat and lentil seedlings in the herbicide-treated plots also did not differ significantly from those in the untreated control plots. The herbicide-treated wheat plots had leaf chlorophyll contents higher than the control plots by 1.8-14.0% on average, while in lentil the herbicide-treated plots showed only a negligible reduction in leaf chlorophyll content relative to the control plots. The grain yields of wheat and lentil in the herbicide-treated plots were higher than in the control plots by 2.8-6.6% and 0.2-10.9%, respectively. Therefore, the two-year bioassay study indicated that the tested herbicides, applied to rainy season rice under strip-tilled non-puddled conditions, had no adverse residual effect on the growth and yield of the succeeding wheat and lentil.

Keywords: crop sensitivity, herbicide persistence, minimum tillage rice, yield improvement

Procedia PDF Downloads 161
5430 E-Learning Recommender System Based on Collaborative Filtering and Ontology

Authors: John Tarus, Zhendong Niu, Bakhti Khadidja

Abstract:

In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments, providing relevant recommendations to online learners. E-learning recommenders continue to play an increasing educational role in helping learners find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and in providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level, and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings; they do not take the learner characteristics into account in the recommendation process and therefore cannot make accurate, personalized recommendations in an e-learning environment. In this paper, we propose a recommendation technique combining collaborative filtering and an ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system in the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that our proposed recommendation technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
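The combination described above, collaborative filtering with ontology-derived learner characteristics folded into the similarity, can be sketched as follows; the ratings, learner profiles, and the 0.5/0.5 blend weight are illustrative assumptions, not the paper's design:

```python
import numpy as np

# User-based collaborative filtering whose similarity is blended with an
# ontology-derived match on learner characteristics (here: learning style).
ratings = np.array([[5, 3, 0, 4],       # 0 = unrated learning material
                    [4, 0, 4, 5],
                    [1, 5, 3, 0]], dtype=float)
profiles = ["visual", "visual", "verbal"]   # ontology-derived learning style

def cosine(a, b):
    mask = (a > 0) & (b > 0)                # co-rated items only
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def hybrid_sim(u, v):
    onto = 1.0 if profiles[u] == profiles[v] else 0.0
    return 0.5 * cosine(ratings[u], ratings[v]) + 0.5 * onto

def predict(u, item):
    sims = [(hybrid_sim(u, v), ratings[v, item])
            for v in range(len(ratings)) if v != u and ratings[v, item] > 0]
    if not sims:                            # cold start: no usable ratings;
        return 0.0                          # the ontology alone would decide
    num = sum(s * r for s, r in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0

print(f"predicted rating of learner 0 on material 2: {predict(0, 2):.2f}")
```

Because the two "visual" learners get a similarity boost, the prediction for learner 0 is pulled toward learner 1's rating, which is the personalization effect the hybrid technique aims for.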

Keywords: collaborative filtering, e-learning, ontology, recommender system

Procedia PDF Downloads 386
5429 Optimization of Interface Radio of Universal Mobile Telecommunication System Network

Authors: O. Mohamed Amine, A. Khireddine

Abstract:

Telecom operators are always looking to increase their share of customers; they try to achieve optimum utilization of the deployed equipment, and network optimization has therefore become essential. This project consists of optimizing a UMTS network in a study area that is an urban zone situated in the center of Algiers. The first step was to become familiar with the different (3G) communication systems and with the optimization technique, whose main components and fundamental radio characteristics are introduced.

Keywords: UMTS, UTRAN, WCDMA, optimization

Procedia PDF Downloads 386