Search results for: net present value method
27157 Research of the Activation Energy of Conductivity in P-I-N SiC Structures Fabricated by Doping with Aluminum Using the Low-Temperature Diffusion Method
Authors: Ilkham Gafurovich Atabaev, Khimmatali Nomozovich Juraev
Abstract:
The activation energy of conductivity in p-i-n SiC structures fabricated by doping with aluminum using the new low-temperature diffusion method is investigated. In this method, diffusion is stimulated by the flux of carbon and silicon vacancies created by surface oxidation. The activation energy of conductivity in the p-layer is 0.25 eV, close to the ionization energy of aluminum in 4H-SiC (0.21 to 0.27 eV for the hexagonal and cubic positions of aluminum in the silicon sublattice of weakly doped crystals). The conductivity of the i-layer (measured in the reverse-biased diode) shows two activation energies: 0.02 eV and 0.62 eV. The 0.62 eV level is apparently a deep trap level, a complex of aluminum with a vacancy. According to published data, an analogous level system (with activation energies of 0.05, 0.07, 0.09 and 0.67 eV) was observed in ion-aluminum-doped 4H-SiC samples.
Keywords: activation energy, aluminum, low temperature diffusion, SiC
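Activation energies such as those quoted above are typically extracted from the slope of an Arrhenius plot of ln σ against 1/kT. A minimal sketch of that extraction, using synthetic data (not the paper's measurements):

```python
import numpy as np

def activation_energy(T, sigma):
    """Estimate the activation energy (eV) from conductivity vs. temperature
    via a linear fit of ln(sigma) against 1/kT (Arrhenius plot):
    sigma = sigma0 * exp(-Ea / kT)."""
    k_B = 8.617e-5  # Boltzmann constant, eV/K
    x = 1.0 / (k_B * np.asarray(T, dtype=float))
    slope, _ = np.polyfit(x, np.log(sigma), 1)
    return -slope

# Synthetic conductivity data for a 0.25 eV level (illustrative only)
T = np.linspace(250.0, 450.0, 30)
sigma = 1e3 * np.exp(-0.25 / (8.617e-5 * T))
Ea = activation_energy(T, sigma)   # recovers ~0.25 eV
```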
Procedia PDF Downloads 283
27156 Challenges in Implementing the Inculcation of Noble Values During Teaching by Primary School Teachers in Peninsular Malaysia
Authors: Mohamad Khairi Haji Othman, Mohd Zailani Mohd Yusoff, Rozalina Khalid
Abstract:
The inculcation of noble values in teaching and learning is very important, especially for building students with good character and values. The purpose of this research is therefore to identify the challenges of inculcating noble values in teaching in primary schools. The study was conducted at four schools in the North Zone of Peninsular Malaysia, using a qualitative approach in the form of case studies. The qualitative approach aims at gaining meaning and a deep understanding of the phenomenon studied from the perspectives of the study participants, and is not intended to produce generalizations. The sample consists of eight teachers, purposively chosen from four types of schools. Data were collected through semi-structured interviews and analyzed using the constant comparative method. The study found that the main challenges faced by teachers were student problems and classroom control, which made it difficult for teachers to inculcate noble values during teaching. Language was also a challenge, as students had difficulty understanding it. Similarly, peers posed a challenge because students are more easily influenced by friends than by teachers' instructions. The last challenge was the increasingly widespread influence of technology and electronic mass media. The findings suggest that teachers need to innovate in order to assist the school in inculcating religious and moral education in students. The school, through its guidance and counseling teachers, can also plan activities appropriate to students' present condition.
Through this study, teachers and the school should work together to develop students' values in line with the National Education Philosophy, which seeks to produce intelligent, emotional, spiritual, intellectual and social human capital.
Keywords: challenges, implementation, inculcation, noble values
Procedia PDF Downloads 190
27155 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy
Authors: Yas Barzegar, Atrin Barzegar
Abstract:
Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods have proven their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model; each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset covering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment; the FIS is an effective method for managing complicated, uncertain water systems and predicting drinking water quality, and it can be modified easily to improve performance.
Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function
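The abstract specifies three inputs and 27 rules per sub-model but not the membership functions or rule base, so the following is only a generic two-input Mamdani sketch (triangular sets, min/max inference, centroid defuzzification); all membership functions, rules, and numbers are illustrative assumptions, not the paper's model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def mamdani(ph, hardness):
    """Toy two-input Mamdani FIS with centroid defuzzification.
    Sets and rules are illustrative only."""
    y = np.linspace(0.0, 100.0, 501)        # output universe: quality score
    good = tri(y, 50, 100, 150)             # rises toward 100 on the domain
    poor = tri(y, -50, 0, 50)               # falls away from 0
    ph_ok   = tri(ph, 6.5, 7.5, 8.5)
    hard_ok = tri(hardness, 0, 100, 250)
    r1 = min(ph_ok, hard_ok)                # IF pH ok AND hardness ok THEN good
    r2 = 1.0 - r1                           # toy complement rule THEN poor
    # Mamdani inference: clip (min) each consequent, aggregate with max
    agg = np.maximum(np.minimum(r1, good), np.minimum(r2, poor))
    return float((y * agg).sum() / agg.sum())  # centroid defuzzification

score = mamdani(ph=7.2, hardness=120)  # higher score = better quality
```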
Procedia PDF Downloads 81
27154 Author Name Disambiguation for Biomedical Literature
Authors: Parthiban Srinivasan
Abstract:
PubMed provides online access to the National Library of Medicine database (MEDLINE) and other publications, which together contain close to 25 million scientific citations from 1865 to the present, with close to 80 million author name instances. For any work of literature, a fundamental issue is to identify the individual(s) who wrote it and, conversely, to identify all of the works that belong to a given individual. Due to the lack of universal standards for name information, name ambiguity has two aspects: name synonymy (a single author with multiple name representations) and name homonymy (multiple authors sharing the same name representation). In this talk, we present some results from our extensive work in author name disambiguation for PubMed citations. Information will be presented on the effectiveness and shortcomings of different aspects of successful name disambiguation, such as parsing, validation, standardization and normalization.
Keywords: disambiguation, normalization, parsing, PubMed
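The parsing/normalization step mentioned above can be illustrated with a small sketch that collapses synonymous name variants to one key; the exact normalization rules used by the authors are not stated, so this is a generic assumption:

```python
import re
import unicodedata

def normalize_author(name: str) -> str:
    """Normalize an author-name string to a 'lastname, initials' key so that
    variant spellings of the same name collapse together (sketch only)."""
    # Strip accents: 'Gómez' -> 'Gomez'
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = name.lower().strip()
    if "," in name:                       # 'Smith, John A.' style
        last, first = [p.strip() for p in name.split(",", 1)]
    else:                                 # 'John A. Smith' style
        parts = name.split()
        last, first = parts[-1], " ".join(parts[:-1])
    initials = " ".join(w[0] for w in re.split(r"[\s.]+", first) if w)
    return f"{last}, {initials}"

# Synonymous variants map to the same key (name synonymy);
# homonymy (two people sharing a key) needs further evidence to resolve.
key = normalize_author("Smith, John A.")
```

Note that normalization alone handles synonymy; resolving homonymy typically requires additional features (affiliations, co-authors, topics).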
Procedia PDF Downloads 302
27153 Nonlinear Free Surface Flow Simulations Using Smoothed Particle Hydrodynamics
Authors: Abdelraheem M. Aly, Minh Tuan Nguyen, Sang-Wook Lee
Abstract:
Incompressible smoothed particle hydrodynamics (ISPH) is used to simulate impact free surface flows. In ISPH, pressure is evaluated by solving a pressure Poisson equation using a semi-implicit algorithm based on the projection method. The current ISPH method is applied to simulate dam-break flow over an inclined plane with different inclination angles, and the effects of the inclination angle on the wave-front velocity and the pressure distribution are discussed. The water-entry impact of a circular cylinder in a tank has also been simulated using the ISPH method. The computed pressures on the solid boundaries are studied and compared with experimental results.
Keywords: incompressible smoothed particle hydrodynamics, free surface flow, inclined plane, water entry impact
Procedia PDF Downloads 406
27152 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method
Authors: Mohamad R. Moshtagh, Ahmad Bagheri
Abstract:
Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios include: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalanced condition introduced to a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method and then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults in both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for the two-class scenario in both the training and validation datasets; for the three-class scenario, it demonstrated 100% accuracy in the training dataset and 95.3% in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset; in the three-class scenario, it achieved 85.3% accuracy in the training dataset and 77.2% in the validation dataset.
The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while for the three-class scenario it attained 100% accuracy in the training dataset and 90.8% in the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in identifying and classifying fault conditions in rotating machinery. The proposed fault detection system uses vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
Keywords: fault detection, gearbox, machine learning, wiener method
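The abstract does not detail the "Wiener feature selection" step, so the sketch below makes an assumption: it applies a local (adaptive) Wiener filter to the vibration signal and then extracts simple statistical features, which may differ from the authors' actual pipeline:

```python
import numpy as np

def wiener_filter(x, window=11, noise=None):
    """Local adaptive Wiener filter: y = m + (1 - n/v) * (x - m),
    with local mean m and local variance v over a sliding window,
    and noise power n estimated as the mean local variance."""
    kernel = np.ones(window) / window
    m = np.convolve(x, kernel, mode="same")              # local mean
    v = np.convolve(x * x, kernel, mode="same") - m * m  # local variance
    if noise is None:
        noise = v.mean()
    v = np.maximum(v, noise)                             # clamp: pure-noise regions -> mean
    return m + (1.0 - noise / v) * (x - m)

def vib_features(x):
    """A few common vibration features (illustrative, not the paper's set)."""
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    kurt = np.mean((x - x.mean()) ** 4) / (x.std() ** 4 + 1e-12)
    return np.array([rms, peak, kurt])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
feats = vib_features(wiener_filter(signal))
```

These feature vectors would then be fed to the CNN/MLP/KNN/Random Forest classifiers mentioned in the abstract.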
Procedia PDF Downloads 89
27151 A Nonstandard Finite Difference Method for Weather Derivatives Pricing Model
Authors: Clarinda Vitorino Nhangumbe, Fredericks Ebrahim, Betuel Canhanga
Abstract:
The price of a weather-derivative option can be approximated as the solution of a two-dimensional, convection-dominated convection-diffusion partial differential equation derived from the Ornstein-Uhlenbeck process, where one variable represents the weather dynamics and the other represents the underlying weather index. With appropriate financial boundary conditions, the solution of the pricing equation is approximated using a nonstandard finite difference method. It is shown that the proposed numerical scheme preserves positivity as well as stability and consistency. In order to illustrate the accuracy of the method, the numerical results are compared with those of other methods, and the model is tested on real weather data.
Keywords: nonstandard finite differences, Ornstein-Uhlenbeck process, partial differential equations approach, weather derivatives
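The defining idea of a nonstandard finite difference (NSFD) scheme is to replace the step size h in the denominator by a "denominator function" φ(h) chosen from the problem's dynamics. The paper's pricing scheme is not reproduced here; instead, a minimal textbook illustration on the decay equation u' = -λu, where φ(h) = (1 - e^(-λh))/λ makes the discrete solution exact at the grid points:

```python
import numpy as np

def nsfd_decay(lam, h, steps, u0=1.0):
    """Nonstandard finite difference scheme for u' = -lam*u.
    With phi(h) = (1 - exp(-lam*h))/lam (Mickens' construction), the update
    (u_{k+1} - u_k)/phi = -lam*u_k reproduces the exact solution on the grid."""
    phi = (1.0 - np.exp(-lam * h)) / lam
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        u[k + 1] = u[k] - lam * phi * u[k]
    return u

u = nsfd_decay(lam=2.0, h=0.5, steps=10)
exact = np.exp(-2.0 * 0.5 * np.arange(11))
# matches the exact solution to machine precision, even at this large step;
# positivity is preserved because 1 - lam*phi = exp(-lam*h) > 0
```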
Procedia PDF Downloads 120
27150 In-Silico Evaluation and Antihyperglycemic Potential of Leucas Cephalotes
Authors: Anjali Verma, Mahesh Pal, Veena Pande, Dalip Kumar Upreti
Abstract:
The present study explores the anti-hyperglycemic activity of Leucas cephalotes plant parts. Fruit, leaf, stem, and root parts of Leucas cephalotes were extracted in ethanol and evaluated for anti-hyperglycemic activity. The ethanolic extracts of fruit and leaves showed significant α-amylase inhibitory activity, with IC50 values of 92.86 ± 0.89 μg/mL and 98.09 ± 0.69 μg/mL, respectively. Two known compounds, β-sitosterol and lupeol, were isolated from the ethanolic extract of L. cephalotes leaves and subjected to anti-hyperglycemic testing. Lupeol showed the best activity, with an IC50 of 55.73 ± 0.47 μg/mL, and the results were verified by a docking study of these compounds on the active site of mammalian α-amylase. It was concluded that β-sitosterol and lupeol each form one H-bond interaction with the active-site residues Asp212 or Thr21. The estimated free energy of binding of β-sitosterol was found to be -9.47 kcal mol-1 with an estimated inhibition constant (Ki) of 558.94 nmol, whereas that of lupeol was -11.73 kcal mol-1 with an estimated inhibition constant (Ki) of 476.71 pmol. The study clearly showed that lupeol is more potent than β-sitosterol, and indicates that L. cephalotes has significant potential to inhibit the α-amylase enzyme.
Keywords: alpha-amylase, beta-sitosterol, hyperglycemia, lupeol
Procedia PDF Downloads 216
27149 Highly Sensitive Nanostructured Chromium Oxide Sensor for Analysis of Diabetic Patient’s Breath
Authors: Nipin Kohli, Ravi Chand Singh
Abstract:
Diabetes mellitus is a serious illness and can be life-threatening if left untreated. Acetone present in exhaled breath is a biomarker of diabetes mellitus, occurring at higher concentrations in the breath of diabetic patients than in that of healthy people. In the present work, a portable gas sensor system based on chromium oxide (Cr₂O₃) nanoparticles has been developed that can analyze a diabetic patient's breath. Undoped and indium (In) doped Cr₂O₃ nanoparticles were synthesized by a chemical route and characterized by X-ray diffraction, scanning electron microscopy, Raman spectroscopy, UV-visible spectroscopy, and photoluminescence spectroscopy for their structural, morphological and optical properties. Thick-film gas sensors were fabricated from the synthesized samples. To diagnose diabetes, the sensors' response to low concentrations of acetone was measured, and it was found that the addition of indium dramatically enhances the acetone sensing response. Moreover, the fabricated sensors were highly stable, reproducible and resistant to humidity. The enhanced response of the doped sensors towards acetone can be ascribed to an increase in defects due to the addition of a dopant, and In-doped Cr₂O₃ sensors were found to be more useful for analyzing the breath of diabetic patients.
Keywords: diabetes mellitus, nanoparticles, Raman spectroscopy, sensor
Procedia PDF Downloads 147
27148 An Application of Extreme Value Theory as a Risk Measurement Approach in Frontier Markets
Authors: Dany Ng Cheong Vee, Preethee Nunkoo Gonpot, Noor Sookia
Abstract:
In this paper, we consider the application of Extreme Value Theory (EVT) as a risk measurement tool. The Value at Risk (VaR) for a set of indices from six frontier-market stock exchanges is calculated using the Peaks-over-Threshold method, and the performance of the model is evaluated index-wise using coverage tests and loss functions. Our results show that 'fat-tailedness' of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets that have experienced extremes in the past, making the model capable of coping with upcoming extremes (the Colombo, Tunisia and Zagreb stock exchanges). On the other hand, we find that indices with lower past than present volatility fail to adequately deal with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces rather static VaR figures that do not reflect the actual dynamics of the data.
Keywords: extreme value theory, financial crisis 2008, value at risk, frontier markets
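The Peaks-over-Threshold VaR estimator fits a generalized Pareto distribution (GPD) to the exceedances over a high threshold u and inverts the tail formula VaR_p = u + (β/ξ)·[((n/N_u)(1−p))^(−ξ) − 1]. A sketch on synthetic heavy-tailed losses, fitting the GPD by probability-weighted moments for simplicity (maximum likelihood is more common in practice; the paper's data and fitting details are not reproduced here):

```python
import numpy as np

def pot_var(losses, threshold, p=0.99):
    """Value at Risk via Peaks-over-Threshold with a PWM-fitted GPD
    (Hosking & Wallis estimators for shape xi and scale beta)."""
    losses = np.asarray(losses, dtype=float)
    exc = np.sort(losses[losses > threshold] - threshold)  # exceedances, ascending
    n, nu = losses.size, exc.size
    b0 = exc.mean()
    # PWM a1 = E[X*(1-F(X))], estimated with decreasing plotting positions
    b1 = np.mean(exc * (nu - 1 - np.arange(nu)) / (nu - 1.0))
    xi = 2.0 - b0 / (b0 - 2.0 * b1)          # GPD shape
    beta = 2.0 * b0 * b1 / (b0 - 2.0 * b1)   # GPD scale
    return threshold + beta / xi * (((n / nu) * (1.0 - p)) ** (-xi) - 1.0)

rng = np.random.default_rng(42)
losses = rng.pareto(3.0, 5000)               # synthetic heavy-tailed losses
u = np.quantile(losses, 0.90)                # threshold at the 90% quantile
var99 = pot_var(losses, u, p=0.99)
```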
Procedia PDF Downloads 279
27147 An Image Stitching Approach for Scoliosis Analysis
Authors: Siti Salbiah Samsudin, Hamzah Arof, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris
Abstract:
Standard X-ray spine images produced by the conventional screen-film technique have a limited field of view. This limitation may obstruct a complete inspection of the spine unless images of different parts of the spine are placed contiguously next to each other to form a complete structure. Another way to produce a whole-spine image is to assemble the digitized X-ray images of its parts automatically using image stitching. This paper presents a new Medical Image Stitching (MIS) method that utilizes Minimum Average Correlation Energy (MACE) filters to identify and merge pairs of X-ray medical images. The effectiveness of the proposed method is demonstrated in two sets of experiments involving two databases containing a total of 40 pairs of overlapping and non-overlapping spine images. The experimental results are compared to those produced by the Normalized Cross Correlation (NCC) and Phase Only Correlation (POC) methods. The proposed method outperforms the NCC and POC methods in identifying both overlapping and non-overlapping medical images. Its efficacy is further vindicated by its average execution time, which is about two to five times shorter than those of the POC and NCC methods.
Keywords: image stitching, MACE filter, panorama image, scoliosis
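The NCC baseline the paper compares against can be sketched compactly: score every candidate overlap between two image strips with the normalized cross-correlation and keep the best. This is a toy illustration on random data, not the paper's MACE-filter method:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches;
    1.0 means a perfect (linear) match, values near 0 mean no correlation."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_overlap(left, right, min_overlap=5):
    """Find the column overlap between two strips that maximizes NCC."""
    best, score = min_overlap, -np.inf
    for w in range(min_overlap, min(left.shape[1], right.shape[1])):
        s = ncc(left[:, -w:], right[:, :w])   # trailing cols vs. leading cols
        if s > score:
            best, score = w, s
    return best

rng = np.random.default_rng(1)
img = rng.random((40, 60))
left, right = img[:, :40], img[:, 25:]       # true overlap = 15 columns
w = best_overlap(left, right)
```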
Procedia PDF Downloads 464
27146 Examining Media Literacy Strategies through Questionnaires and Analyzing the Behavioral Patterns of Middle-Aged and Elderly Persons
Authors: Chia Yen Li, Wen Huei Chou, Mieko Ohsuga, Tsuyoshi Inoue
Abstract:
The evolution of the digital age has led to people's lives being pervaded by both facts and misinformation, challenging media literacy (ML). Middle-aged and elderly persons (MEPs) are prone to disseminating large amounts of misinformation, which often endangers their lives when they erroneously believe it. Several countries have actively established fact-checking platforms to combat misinformation, but these cannot keep pace with its rapid proliferation on social media. In this study, the questionnaire survey method was used to collect data on MEPs' behavior, cognition, attitudes, and concepts of social media when using the mobile instant messaging app LINE; to analyze their behavioral patterns and reasons for sharing misinformation; and to summarize design strategies for improving their ML. The findings can serve as a reference for future related research.
Keywords: media literacy, middle-aged and elderly persons, social media, misinformation
Procedia PDF Downloads 115
27145 Numerical Studies on Bypass Thrust Augmentation Using Convective Heat Transfer in Turbofan Engine
Authors: R. Adwaith, J. Gopinath, Vasantha Kohila B., R. Chandru, Arul Prakash R.
Abstract:
The turbofan engine is a type of air-breathing engine, widely used in aircraft propulsion, that produces thrust mainly from the mass flow of air bypassing the engine core. The present research develops an effective numerical method for increasing the thrust generated from the bypass air. The thrust increase is brought about by heating the walls of the bypass valve, either with heat transferred convectively from the combustion chamber or with external heat from multi-cell tube resistors that convert electricity generated by dynamos into heat. This raises the temperature of the flow in the valve and thereby increases the velocity of the flow entering the nozzle of the engine. As a result, the mass flow of air passing through the core engine to produce thrust can be significantly reduced, saving a considerable amount of jet fuel. Numerical analysis has been carried out on a scaled-down version of a typical turbofan bypass valve, with the valve wall temperature raised to 700 K. The analysis shows that the exit velocity contributing to thrust increases significantly, by 10%, due to the heating of the bypass valve. The optimum increase in temperature, and the corresponding increase in jet velocity, are calculated to determine the operating temperature range for an efficient increase in velocity. The technique increases thrust by using heated bypass air without extracting much work from the fuel, thus improving the efficiency of existing turbofan engines. Dimensional analysis has been carried out to verify the accuracy of the results obtained numerically.
Keywords: turbofan engine, bypass valve, multi-cell tube, convective heat transfer, thrust
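Why heating the bypass flow raises jet velocity can be seen from the ideal (isentropic) nozzle relation v = sqrt(2·cp·T0·(1 − (pe/p0)^((γ−1)/γ))): for a fixed pressure ratio, exit velocity scales with the square root of stagnation temperature. A textbook sketch, with illustrative (not the paper's) numbers:

```python
import math

def exit_velocity(T0, pressure_ratio, cp=1005.0, gamma=1.4):
    """Ideal isentropic nozzle exit velocity [m/s] for stagnation
    temperature T0 [K] and nozzle pressure ratio pe/p0 (air values
    for cp and gamma; illustration only, not the paper's CFD model)."""
    return math.sqrt(2.0 * cp * T0 *
                     (1.0 - pressure_ratio ** ((gamma - 1.0) / gamma)))

v_cold = exit_velocity(T0=300.0, pressure_ratio=0.6)
v_hot = exit_velocity(T0=700.0, pressure_ratio=0.6)
ratio = v_hot / v_cold   # = sqrt(700/300), since v scales with sqrt(T0)
```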
Procedia PDF Downloads 360
27144 Hydrometallurgical Treatment of Abu Ghalaga Ilmenite Ore
Authors: I. A. Ibrahim, T. A. Elbarbary, N. Abdelaty, A. T. Kandil, H. K. Farhan
Abstract:
The present work studies the leaching of Abu Ghalaga ilmenite ore by hydrochloric acid, with simultaneous reduction by the iron-powder method, to dissolve its titanium and iron contents. The iron content in the produced liquor is separated by solvent extraction using TBP as a solvent. All parameters affecting the efficiency of the dissolution process were studied separately, including the acid concentration, the solid/liquid ratio (which controls the ilmenite/acid molar ratio), temperature, time and grain size. The optimum conditions for maximum leaching are 30% HCl acid with a solid/liquid ratio of 1/30 at 80 °C for 4 h, using ore ground to -350 mesh size. All parameters affecting the solvent extraction and stripping of the iron content from the produced liquor were also studied. Results show that the best extraction is obtained at a solvent/solution ratio of 1/1 with shaking at 240 RPM for 45 minutes at 30 °C, whereas the best stripping of iron occurs at an H₂O/solvent ratio of 2/1.
Keywords: ilmenite ore, leaching, titanium solvent extraction, Abu Ghalaga ilmenite ore
Procedia PDF Downloads 293
27143 Wearable Music: Generation of Costumes from Music and Generative Art and Wearing Them by 3-Way Projectors
Authors: Noriki Amano
Abstract:
The final goal of this study is to create another way for people to enjoy music, through the performance of 'Wearable Music'. Concretely speaking, we generate colorful costumes in real time from music and realize their 'dressing' by projecting them onto a person. For this purpose, we propose three methods in this study: first, a method of giving color to music in a three-dimensional way; second, a method of generating images of costumes from music; and third, a method of wearing the images of music. This study stands out from related work in that we generate images of unique costumes from music and realize wearing them. We use techniques of generative art to generate the images of unique costumes, and project the images onto fog generated around a person from three directions using projectors. From this study, we obtain a way to enjoy music as something 'wearable'. Furthermore, it opens the prospect of unconventional entertainment based on the fusion of music and costumes.
Keywords: entertainment computing, costumes, music, generative programming
Procedia PDF Downloads 177
27142 Taguchi Method for Analyzing a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
Logistics network design is known as one of the strategic decision problems. As such problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we involve reverse flow through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which make the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is considered as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adopted to identify the optimum operating condition of the proposed memetic algorithm and improve the results. Four factors are considered: population size, crossover rate, local search iterations, and number of iterations. Analyzing these parameters and the resulting improvements are the focus of this research.
Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method
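Taguchi tuning assigns the four factors to an orthogonal array, runs the algorithm at each row, and picks, for each factor, the level with the best mean signal-to-noise (S/N) ratio (for "larger is better" responses, S/N = −10·log10(mean(1/y²))). A sketch with a standard L9(3⁴) array and synthetic responses; the paper's actual levels and responses are not given in the abstract:

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each (coded 0,1,2)
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])

def sn_larger_is_better(y):
    """Taguchi S/N ratio for 'larger is better' responses (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def best_levels(responses):
    """Pick, for each factor, the level with the highest mean S/N ratio."""
    sn = np.array([sn_larger_is_better(r) for r in responses])  # one per run
    return [int(np.argmax([sn[L9[:, f] == lvl].mean() for lvl in range(3)]))
            for f in range(L9.shape[1])]

# Illustrative responses (e.g., solution quality over 3 replicate runs);
# factor 0 is constructed to have a strong positive effect
rng = np.random.default_rng(7)
responses = [100.0 + 5.0 * L9[i, 0] + 2.0 * L9[i, 2] + rng.normal(0, 0.5, 3)
             for i in range(9)]
levels = best_levels(responses)
```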
Procedia PDF Downloads 193
27141 Bright, Dark N-Soliton Solution of Fokas-Lenells Equation Using Hirota Bilinearization Method
Authors: Sagardeep Talukdar, Riki Dutta, Gautam Kumar Saharia, Sudipta Nandy
Abstract:
In nonlinear optics, the Fokas-Lenells equation (FLE) is a well-known integrable equation that describes how ultrashort pulses propagate through optical fiber. Like any other integrable equation, it admits localized wave solutions. We apply the Hirota bilinearization method to obtain soliton solutions of the FLE; the proposed bilinearization makes use of an auxiliary function. Applying the method to the FLE with a vanishing boundary condition, we obtain bright 1-soliton and 2-soliton solutions and propose a scheme for obtaining an N-soliton solution. We use an additional parameter that is responsible for the shift in the position of the soliton, and further analyze the 2-soliton solution by asymptotic analysis. For the non-vanishing boundary condition, we obtain the dark 1-soliton solution. We find that the suggested bilinearization approach, using the auxiliary function, greatly simplifies the process while still producing the desired outcome. We believe the present analysis will be helpful in understanding the use of the FLE in nonlinear optics and other areas of physics.
Keywords: asymptotic analysis, Fokas-Lenells equation, Hirota bilinearization method, soliton
Procedia PDF Downloads 118
27140 Research and Development of Methodology, Tools, Techniques and Methods to Analyze and Design Interface, Media, Pedagogy for Educational Topics to be Delivered via Mobile Technology
Authors: Shimaa Nagro, Russell Campion
Abstract:
Mobile devices are becoming ever more widely available, with growing functionality, and they are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material's user interfaces for mobile devices is beset by many unresolved research problems, such as those arising from the constraints of mobile devices or from issues linked to effective learning. The proposed research aims to produce: (i) a method framework for the design and evaluation of educational material's interfaces to be delivered on mobile devices in multimedia form, based on Human-Computer Interaction strategies; and (ii) a software tool implemented as a fast-track alternative to using the method framework in full. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection and three case studies for validating the method framework. The method framework enables an educational designer to effectively and efficiently create educational multimedia interfaces for mobile devices by following a particular methodology that contains practical and usable tools and techniques. It accepts any educational material in its final lesson plan and treats this plan as a static element; it will not suggest changes to any information given in the lesson plan, but it will help the instructor design the final lesson plan in a multimedia format to be presented on mobile devices.
Keywords: mobile learning, M-Learn, HCI, educational multimedia, interface design
Procedia PDF Downloads 378
27139 Risk Management in Industrial Supervision Projects
Authors: Érick Aragão Ribeiro, George André Pereira Thé, José Marques Soares
Abstract:
Several problems in industrial supervision software development projects may lead to delays or to the cancellation of projects. These problems can be avoided or contained by using methods for the identification, analysis and control of risks, which give an overview of the problems that can arise in projects and of their immediate solutions. We therefore propose a risk management method applied to the teaching and development of industrial supervision software. The method, developed through a literature review and an analysis of previous projects, is divided into management phases and has basic features that were validated in experimental research carried out by mechatronics engineering students and professionals. Management is conducted through the stages of risk identification, analysis, planning, monitoring, control and communication. Programmers prioritize risks by considering both the gravity and the probability of occurrence of each risk, and the outputs of the method indicate which risks have occurred or are about to happen. The first results indicate which risks occur at different stages of the project and which risks have a high probability of occurring. The results show the efficiency of the proposed method compared to other methods, demonstrating improved software quality and guiding developers in their decisions. This new way of developing supervision software helps students identify design problems, evaluate the software developed and propose effective solutions. We conclude that risk management optimizes the development of industrial process control software and provides higher quality in the product.
Keywords: supervision software, risk management, industrial supervision, project management
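The prioritization rule described above (gravity × probability of occurrence) can be sketched as a simple risk register; the risk names and the 1-5 scales below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    gravity: int      # 1 (negligible) .. 5 (critical) -- assumed scale

    @property
    def priority(self) -> int:
        # prioritization rule from the abstract: gravity x probability
        return self.probability * self.gravity

risks = [
    Risk("Unstable PLC communication protocol", probability=4, gravity=5),
    Risk("Unclear HMI requirements", probability=3, gravity=4),
    Risk("Late delivery of sensor hardware", probability=2, gravity=3),
]
# Highest-priority risks are monitored and mitigated first
ranked = sorted(risks, key=lambda r: r.priority, reverse=True)
```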
Procedia PDF Downloads 361
27138 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array
Authors: S. S. Sree Sanker, K. N. Madhusoodanan
Abstract:
Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design and high sensitivity. Photonic sensing is one of the recent advances in biosensor technology; it measures the change in refractive index induced by differences in molecular interactions as the concentration of the analyte changes. Glucose is an aldosic monosaccharide, a metabolic source in many organisms. Terahertz waves occupy the space between infrared and microwaves in the electromagnetic spectrum, and are expected to be applied to various types of sensors for detecting harmful substances in blood, cancer cells in skin and microbacteria in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with a height of 100 µm, a length of 1600 µm and a width of 50 µm; the array period of the crystal is 500 µm. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 µm; the height and diameter of the pillars are 160 µm and 100 µm, respectively. The two samples considered in this work are blood and a glucose solution, labelled sample 1 and sample 2, respectively. The proposed sensor detects the concentration of glucose in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to a particular concentration of glucose was measured using the parameter retrieval method. The refractive indices of the two crystals decreased gradually with increasing glucose concentration in the sample; the 1D photonic crystal showed this behavior at 1 THz and the 2D photonic crystal at 2 THz. The proposed sensor was simulated using CST Microwave Studio.
This will enable us to develop a model that can be used to characterize a glucose sensor. The present study is expected to contribute to blood glucose monitoring.
Keywords: CST Microwave Studio, glucose sensor, photonic crystal, terahertz waves
Procedia PDF Downloads 283
27137 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolution neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyper-parameter search method guides the search towards better regions of the parameter space. Based on the found hyper-parameters, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization
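A genetic hyper-parameter search of the kind described can be sketched as follows. This is a toy illustration, not the paper's implementation: the search space, fitness function (a stand-in for validation accuracy, which in practice would require training the DCNN), and GA settings are all hypothetical.

```python
import random

def genetic_search(space, fitness, pop_size=8, generations=10, seed=0):
    """Toy genetic-algorithm hyper-parameter search (higher fitness is
    better). `space` maps each hyper-parameter name to candidate values."""
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in space.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in space}  # crossover
            if rng.random() < 0.2:                   # point mutation
                k = rng.choice(list(space))
                child[k] = rng.choice(space[k])
            children.append(child)
        pop = parents + children                     # elitist replacement
    return max(pop, key=fitness)

# Hypothetical DCNN search space; the fitness peaks at lr=1e-3, filters=64.
space = {"lr": [1e-1, 1e-2, 1e-3, 1e-4], "filters": [16, 32, 64, 128]}
fitness = lambda h: -abs(h["lr"] - 1e-3) - abs(h["filters"] - 64) / 100
best = genetic_search(space, fitness)
```

Keeping the top half of each generation (elitism) guarantees the best configuration found so far is never lost, which matters when each fitness evaluation is an expensive network training run.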
Procedia PDF Downloads 303
27136 A Sectional Control Method to Decrease the Accumulated Survey Error of Tunnel Installation Control Network
Authors: Yinggang Guo, Zongchun Li
Abstract:
In order to decrease the accumulated survey error of the tunnel installation control network of a particle accelerator, a sectional control method is proposed. Firstly, the rule by which positional error accumulates with the length of the control network is obtained by simulation calculation according to the shape of the tunnel installation control network. Then, the RMS of the horizontal positional precision of the tunnel backbone control network is taken as the threshold: when the accumulated error exceeds this threshold, the tunnel installation control network is divided into subsections of reasonable length. On each segment, the middle survey station is taken as the datum for an independent adjustment calculation. Finally, taking the backbone control points as weak datums, a weighted partial-parameter adjustment is performed with the adjustment results of each segment and the coordinates of the backbone control points; the subsections are thereby jointed and unified into the global coordinate system during the adjustment. An installation control network of a linac with a length of 1.6 km is simulated. The RMS of the positional deviation of the proposed method is 2.583 mm, and the RMS of the difference in positional deviation between adjacent points reaches 0.035 mm. Experimental results show that the proposed sectional control method can not only effectively decrease the accumulated survey error but also guarantee the relative positional precision of the installation control network, so it can be applied in the data processing of tunnel installation control networks, especially for large particle accelerators.
Keywords: alignment, tunnel installation control network, accumulated survey error, sectional control method, datum
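The intuition behind sectioning, that re-establishing an independent datum per section stops positional error from accumulating along the whole traverse, can be illustrated with a deliberately simple 1-D random-walk model. This is only a conceptual sketch, not the paper's adjustment procedure; the station count, noise level, and section length are hypothetical.

```python
import math
import random

def traverse_rms(n_stations, sigma, section_len=None, seed=0):
    """Toy 1-D model of survey-error accumulation along a traverse: each
    station adds Gaussian noise to the running positional error. With
    `section_len` set, the error is reset at the start of each section,
    mimicking an independent datum per section. Returns the RMS error."""
    rng = random.Random(seed)
    errs, e = [], 0.0
    for i in range(n_stations):
        if section_len and i % section_len == 0:
            e = 0.0                       # new independent datum
        e += rng.gauss(0.0, sigma)
        errs.append(e)
    return math.sqrt(sum(x * x for x in errs) / n_stations)

# Average over several noise realisations (all numbers hypothetical):
glob = sum(traverse_rms(320, 0.1, seed=s) for s in range(30)) / 30
sect = sum(traverse_rms(320, 0.1, section_len=40, seed=s) for s in range(30)) / 30
```

In this model the single-datum RMS grows roughly with the square root of the traverse length, while the sectional RMS is bounded by the section length, which is the qualitative behavior the sectional control method exploits.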
Procedia PDF Downloads 195
27135 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement
Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad
Abstract:
An approach to super-resolve a low-resolution (LR) image is presented in this paper, which is very useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is needed. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-Spline technique and crispening. This work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different types of noise and noise factors.
Keywords: advanced B-Spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter
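The two quantitative metrics named above are directly related: PSNR is derived from the MSE. A minimal sketch, with images represented as flat lists of 8-bit intensities for simplicity:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB computed from the pixel-wise MSE of two equally sized
    images (flat intensity lists). Higher PSNR means a closer match."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")               # identical images
    return 10 * math.log10(peak ** 2 / mse)

# Example: two 4-pixel "images" differing by 5 intensity levels everywhere,
# so MSE = 25 and PSNR = 10*log10(255^2 / 25) ~ 34.15 dB.
ref = [100, 120, 140, 160]
test_img = [105, 125, 145, 165]
```

Because the mapping from MSE to PSNR is monotone, ranking super-resolution variants by PSNR and by MSE gives the same ordering; PSNR is simply the more interpretable (logarithmic, peak-referenced) scale.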
Procedia PDF Downloads 402
27134 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape
Authors: Chen Bo, Wen Zengping
Abstract:
Based on the results of previous studies, we focus on a real ground motion selection and scaling method for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately so that they are compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established that uses not only a Monte Carlo simulation method to create stochastic simulated spectra, accounting for the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its application to structural fragility analysis is demonstrated through case studies. Compared to a previous scheme with no consideration of the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and it fully reveals the uncertainty in the site-specific hazard level. Meanwhile, it can help improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work provides a reasonable and reliable basis for structural seismic evaluation under a scenario earthquake environment.
Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape
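The Monte Carlo spectrum simulation step can be sketched as follows. This is a simplified illustration that samples each spectral ordinate from an independent lognormal distribution; the paper's multivariate model additionally captures cross-period correlation, which is omitted here, and the target medians, dispersion, and sample count are hypothetical.

```python
import math
import random

def sample_spectra(target_median, beta, n, seed=0):
    """Monte Carlo simulation of response spectra under a simplified,
    uncorrelated lognormal model: ln Sa(T) ~ N(ln median(T), beta^2).
    Returns `n` simulated spectra, one Sa value per period."""
    rng = random.Random(seed)
    return [[m * math.exp(rng.gauss(0.0, beta)) for m in target_median]
            for _ in range(n)]

# Hypothetical median Sa targets at three periods (in g) and dispersion:
target = [0.8, 0.5, 0.3]
sims = sample_spectra(target, beta=0.6, n=2000)
med0 = sorted(s[0] for s in sims)[1000]   # empirical median at first period
```

With enough samples, the empirical median of the simulated ordinates recovers the target median, which is the compatibility property the selection procedure then matches records against.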
Procedia PDF Downloads 298
27133 Multi-Response Optimization of CNC Milling Parameters Using Taguchi Based Grey Relational Analysis for AA6061 T6 Aluminium Alloy
Authors: Varsha Singh, Kishan Fuse
Abstract:
This paper presents a study of the grey-Taguchi method to optimize CNC milling parameters for AA6061 T6 aluminium alloy. The grey-Taguchi method combines Taguchi-based design of experiments (DOE) with grey relational analysis (GRA). Multi-response optimization of different quality characteristics, such as surface roughness, material removal rate, and cutting forces, is performed using GRA. The milling parameters considered in the experiments are cutting speed, feed per tooth, and depth of cut, each at three levels. A grey relational grade is used to estimate the overall performance across the quality characteristics. Taguchi's L9 orthogonal array is used for the design of experiments, and MINITAB 17 software is used for the optimization. Analysis of variance (ANOVA) is used to identify the most influential parameter. The experimental results show that grey relational analysis is an effective method for optimizing multi-response characteristics. The optimum results are finally validated by performing a confirmation test.
Keywords: ANOVA, CNC milling, grey relational analysis, multi-response optimization
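The GRA computation that turns several conflicting responses into a single grade can be sketched as follows. The response values below are hypothetical milling runs, not the paper's data; the distinguishing coefficient zeta = 0.5 is the conventional default.

```python
def grey_relational_grade(runs, larger_better, zeta=0.5):
    """Grey relational analysis over experimental runs. `runs` is a list
    of response vectors; `larger_better[j]` says whether response j should
    be maximised (e.g. MRR) or minimised (e.g. surface roughness)."""
    cols = list(zip(*runs))
    norm = []
    for j, col in enumerate(cols):
        lo, hi = min(col), max(col)
        if larger_better[j]:
            norm.append([(x - lo) / (hi - lo) for x in col])   # larger-the-better
        else:
            norm.append([(hi - x) / (hi - lo) for x in col])   # smaller-the-better
    grades = []
    for i in range(len(runs)):
        # Grey relational coefficient per response, then equal-weight mean.
        coeffs = [zeta / ((1.0 - norm[j][i]) + zeta) for j in range(len(cols))]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical runs: [surface roughness Ra (um), MRR (mm^3/min)]
runs = [[0.8, 120.0], [0.5, 100.0], [1.1, 150.0]]
grades = grey_relational_grade(runs, larger_better=[False, True])
```

The run with the highest grade is the best compromise across all responses; with response-specific weights instead of the equal-weight mean, the same machinery expresses different process priorities.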
Procedia PDF Downloads 314
27132 Usability in E-Commerce Websites: Results of Eye Tracking Evaluations
Authors: Beste Kaysı, Yasemin Topaloğlu
Abstract:
Usability is one of the most important quality attributes for web-based information systems, and for e-commerce applications it becomes even more prominent. In this study, we aimed to explore the features that experienced users seek in e-commerce applications, using the eye tracking method in our evaluations. Eye movement data obtained from eye tracking were analyzed based on task completion time, number of fixations, and heat map and gaze plot measures. The results of the analysis show that participants' eye movements are too static in certain areas and that their areas of interest are scattered across many different places, which causes users to fail to complete their transactions. Based on these findings, we outline the issues affecting the usability of e-commerce websites and propose solutions to address them. In this way, e-commerce sites can be developed that leave experienced users more satisfied.
Keywords: e-commerce websites, eye tracking method, usability, website evaluations
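Fixation counts of the kind analyzed here are typically extracted from raw gaze samples with a dispersion-threshold algorithm (I-DT). A minimal sketch, with hypothetical pixel thresholds and a hypothetical gaze trace (the study does not report its detection parameters):

```python
def idt_fixations(samples, max_dispersion, min_samples):
    """Dispersion-threshold (I-DT) fixation detection over (x, y) gaze
    samples: a window counts as a fixation while its bounding-box
    dispersion (width + height) stays under `max_dispersion` and it spans
    at least `min_samples` points. Returns half-open (start, end) index
    pairs, one per detected fixation."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i + 1
        xs, ys = [samples[i][0]], [samples[i][1]]
        while j < n:
            xs.append(samples[j][0]); ys.append(samples[j][1])
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                xs.pop(); ys.pop()       # sample j broke the window
                break
            j += 1
        if j - i >= min_samples:
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# Two stable gaze clusters separated by a saccade (hypothetical trace):
trace = [(100, 100), (101, 99), (100, 101), (400, 300), (401, 301), (399, 300)]
fixes = idt_fixations(trace, max_dispersion=10, min_samples=3)
```

The number of detected fixations and their durations (end minus start, times the sampling interval) are exactly the per-task measures that heat maps and gaze plots then aggregate visually.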
Procedia PDF Downloads 185
27131 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products
Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo
Abstract:
The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, MTBF (Mean Time Between Failures). In engineering practice, RQT plans are mandatorily referred to standards such as MIL-STD-781 or GJB899A-2009. But these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, because the methods in the standards use only the test data of the product itself. The standards also usually assume that the product lifetime is exponentially distributed, which is not suitable for complex products other than electronics. It is therefore desirable to develop an RQT plan derivation method that safely shortens test time while keeping the two risks under control. To this end, an RQT plan derivation method is developed for products whose lifetime follows a Weibull distribution. The merit of the method is that expert judgment is taken into account: applying the Bayesian method translates the expert judgment into prior information on product reliability, and the producer's risk and the consumer's risk are then calculated accordingly. The procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and to carry satisfactorily low risks for both producer and consumer, compared with conventional test plans. A case study shows that, when expert judgment is used in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk
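The two risks being traded off can be made concrete with the simplest Weibull test plan: a zero-failure plan that accepts the product if no unit fails during the test. This is an illustrative fixed-shape sketch, not the paper's Bayesian derivation, and every number below (unit count, test time, shape, scales) is hypothetical.

```python
import math

def zero_failure_risks(n_units, test_time, beta, eta_good, eta_bad):
    """Risks of a zero-failure Weibull test plan (accept iff no failures).
    For Weibull(shape beta, scale eta), the probability that all n units
    survive time t is exp(-n * (t/eta)^beta). Producer's risk: rejecting a
    good product (eta = eta_good); consumer's risk: accepting a bad one
    (eta = eta_bad)."""
    p_accept = lambda eta: math.exp(-n_units * (test_time / eta) ** beta)
    return 1.0 - p_accept(eta_good), p_accept(eta_bad)

# Hypothetical plan: 10 units on test for 500 h, shape beta = 2,
# "good" scale 5000 h vs "bad" scale 1000 h.
producer, consumer = zero_failure_risks(
    n_units=10, test_time=500.0, beta=2.0, eta_good=5000.0, eta_bad=1000.0)
```

Lengthening the test drives the consumer's risk down but the producer's risk up; prior information, as in the paper's Bayesian approach, is what lets both be kept low at a shorter test time.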
Procedia PDF Downloads 146
27130 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings
Authors: Ayhan Ince
Abstract:
In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subject to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root through the material constitutive relationship. The validity of the method is demonstrated by comparing computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimates notch-root elasto-plastic stresses and strains with good accuracy using only linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental nonlinear FE analyses. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.
Keywords: elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue
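The Neuber correction at the heart of the method can be illustrated for the simple proportional-loading case (the paper treats the harder incremental, non-proportional formulation). Neuber's rule equates the notch-root stress-strain product to its elastic estimate, sigma * eps = (Kt * S)^2 / E, and is solved together with a Ramberg-Osgood curve; the material constants and load below are hypothetical steel-like values.

```python
def neuber_notch_stress(s_nom, kt, E, K, n):
    """Solve Neuber's rule  sigma * eps = (Kt * S)^2 / E  together with a
    Ramberg-Osgood curve  eps = sigma/E + (sigma/K)^(1/n)  by bisection.
    Returns the notch-root stress in the units of s_nom. A proportional-
    loading sketch, not the paper's incremental formulation."""
    target = (kt * s_nom) ** 2 / E
    f = lambda sig: sig * (sig / E + (sig / K) ** (1.0 / n)) - target
    lo, hi = 1e-6, kt * s_nom        # the elastic stress Kt*S is an upper bound
    for _ in range(200):             # f is monotone increasing, so bisect
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical material/load: E = 200 GPa, K' = 1000 MPa, n' = 0.15,
# Kt = 3, nominal stress S = 150 MPa (all in MPa).
sigma = neuber_notch_stress(150.0, 3.0, 200e3, 1000.0, 0.15)
```

The solved notch-root stress falls below the fictitious elastic value Kt*S = 450 MPa, which is exactly the plasticity-driven stress redistribution the Neuber correction captures without an FE run.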
Procedia PDF Downloads 469
27129 A Theoretical Study of Accelerating Neutrons in LINAC Using Magnetic Gradient Method
Authors: Chunduru Amareswara Prasad
Abstract:
The main aim of this proposal is to reveal the secrets of the universe by accelerating neutrons. The proposal, in its abridged version, speaks about the possibility of making neutrons accelerate with the help of thermal energy and magnetic energy under controlled conditions, which would be helpful in revealing hidden secrets of the universe, namely dark energy, and in finding properties of the Higgs boson. The paper mainly discusses accelerating neutrons to near the velocity of light in a LINAC, using magnetic energy supplied by magnetic pressurizers. A center-of-mass energy of 94 GeV (~0.5c) for two colliding neutron beams can be achieved using this method. Conventional ways to accelerate neutrons have constraints in accelerating them electromagnetically, as the neutrons need to be separated from tritium or deuterium nuclei. This magnetic gradient method provides an efficient and simple way to accelerate neutrons.
Keywords: neutron, acceleration, thermal energy, magnetic energy, Higgs boson
Procedia PDF Downloads 330
27128 Female Autism Spectrum Disorder and Understanding Rigid Repetitive Behaviors
Authors: Erin Micali, Katerina Tolstikova, Cheryl Maykel, Elizabeth Harwood
Abstract:
Female ASD is seldom studied separately from male ASD. Further, females with ASD are disproportionately underrepresented in the research, at a ratio of 3:1 (male to female). As such, much of the current understanding of female rigid repetitive behaviors (RRBs) stems from research on male RRBs. This can be detrimental to the understanding of female ASD, because it largely discounts female camouflaging and the possibility that females present their autistic symptoms differently. The current literature suggests that females with ASD engage in fewer RRBs than males with ASD, and that when females do engage in RRBs, they are likely to engage in more subtle, less overt obsessions and repetitive behaviors than males. Method: The current study used a mixed-methods, cross-sectional design to identify the type and frequency of RRBs that females with ASD engaged in. The researcher recruited only females, with the criteria that they be at least age six and not have a co-occurring cognitive impairment. Results: The researcher collected previous testing data (Autism Diagnostic Interview-Revised (ADI-R), Child or Adolescent/Adult Sensory Profile-2, Autism/Empathy Quotient, Yale-Brown Obsessive Compulsive Checklist, Rigid Repetitive Behavior Checklist (evaluator-created list), and a demographic questionnaire) from 25 participants. The participants' ages ranged from six to 52; they were 96% Caucasian and 4% Latin American. Qualitative analysis found that the participant pool engaged in six RRB themes: repetitive behaviors, socially restrictive behaviors, repetitive speech, difficulty with transition, obsessive behaviors, and restricted interests, with socially restrictive behaviors and restricted interests occurring most frequently. Within the main themes, 40 subthemes were isolated, defined, and analyzed. Further, preliminary quantitative analysis was run to determine whether age affected camouflaging behaviors and the overall presentation of RRBs; within this dataset, no such effect was found. Further qualitative analysis will be run to determine whether this dataset engaged in more overt or subtle RRBs, to confirm or rebut previous research. The researcher intends to run SPSS analyses to determine whether there is a statistical difference between the RRB themes and overall presentation; secondly, each participant will be analyzed for presentation of RRBs, age, and previous diagnoses. Conclusion: The present study aimed to assist in diagnostic clarity. This was achieved by collecting data from a female-only participant pool across the lifespan, and the current data aided in clarifying the types of RRBs engaged in. A limited sample size was a barrier in this study.
Keywords: autism spectrum disorder, camouflaging, rigid repetitive behaviors, gender disparity
Procedia PDF Downloads 149