Search results for: expanded invasive weed optimization algorithm (exIWO)
1232 Drying Kinetics of Soybean Seeds
Authors: Amanda Rithieli Pereira Dos Santos, Rute Quelvia De Faria, Álvaro De Oliveira Cardoso, Anderson Rodrigo Da Silva, Érica Leão Fernandes Araújo
Abstract:
The study of drying kinetics is of great importance for mathematical modeling, since it provides insight into the processes of heat and mass transfer between products and allows dryers to be adjusted when managing new technologies for these processes. The present work had the objective of studying the drying kinetics of soybean seeds and fitting different statistical models to the experimental data, varying cultivar and temperature. Soybean seeds were pre-dried in a natural environment in order to reduce and homogenize the water content to the level of 14% (dry basis). Then, drying was carried out in a forced air circulation oven at controlled temperatures of 38, 43, 48, 53 and 58 ± 1 °C, using two soybean cultivars, BRS 8780 and Sambaíba, until hygroscopic equilibrium was reached. The experimental design was completely randomized in a 5 x 2 factorial (temperature x cultivar) with 3 replicates. Eleven statistical models used to explain the drying process of agricultural products were fitted to the experimental data. Regression analysis was performed using the least squares Gauss-Newton algorithm to estimate the parameters. The goodness of fit was evaluated from the coefficient of determination (R²), the adjusted coefficient of determination (R² Aj.) and the standard error (S.E.). The models that best represent the drying kinetics of soybean seeds are those of Midilli and Logarithmic.
Keywords: curve of drying seeds, Glycine max L., moisture ratio, statistical models
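The model fitting described above can be sketched with a generic nonlinear least-squares routine. The sketch below fits the Midilli model, MR = a·exp(−k·tⁿ) + b·t, to synthetic moisture-ratio data; the data, initial guesses and parameter values are illustrative, not the paper's measurements, and SciPy's Levenberg-Marquardt solver is a damped variant of the Gauss-Newton method the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    # Midilli model: MR = a*exp(-k*t^n) + b*t
    return a * np.exp(-k * t**n) + b * t

# Synthetic moisture-ratio data (illustrative, not the paper's measurements)
t = np.linspace(0.01, 10.0, 40)            # drying time, h
true = midilli(t, 1.0, 0.35, 1.1, -0.002)
rng = np.random.default_rng(0)
mr = true + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.3, 1.0, 0.0], maxfev=10000)
pred = midilli(t, *popt)

ss_res = np.sum((mr - pred) ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
se = np.sqrt(ss_res / (t.size - 4))        # standard error with 4 fitted parameters
```

R² and the standard error computed this way are the same goodness-of-fit measures the abstract reports.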
Procedia PDF Downloads 630
1231 Thermodynamic Modeling and Exergoeconomic Analysis of an Isobaric Adiabatic Compressed Air Energy Storage System
Authors: Youssef Mazloum, Haytham Sayah, Maroun Nemer
Abstract:
The penetration of renewable energy sources into the electric grid is increasing significantly. However, the intermittence of these sources breaks the balance between supply and demand for electricity; hence the importance of energy storage technologies, which restore the balance and reduce the drawbacks of the intermittence of renewable energies. This paper discusses the modeling and cost-effectiveness of an isobaric adiabatic compressed air energy storage (IA-CAES) system. The proposed system is a combination of a compressed air energy storage (CAES) system with a pumped hydro storage system and a thermal energy storage system. The aim of this combination is to overcome the disadvantages of the conventional CAES system, such as the losses due to storage pressure variation, the loss of the compression heat and the use of fossil fuel sources. A steady-state model is developed to perform energy and exergy analyses of the IA-CAES system and to calculate the distribution of the exergy losses in the system. A sensitivity analysis is also carried out to estimate the effects of some key parameters on the system's efficiency, such as the pinch of the heat exchangers, the isentropic efficiency of the rotating machinery and the pressure losses. The conducted sensitivity analysis is a local analysis, since the sensitivity of each parameter changes with the variation of the other parameters. Therefore, an exergoeconomic study is carried out, as well as a cost optimization, in order to reduce the cost of the electricity produced during the production phase. The optimizer used is OmOptim, a genetic-algorithm-based optimizer.
Keywords: cost-effectiveness, exergoeconomic analysis, isobaric adiabatic compressed air energy storage (IA-CAES) system, thermodynamic modeling
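One building block of such a thermodynamic model is the adiabatic compression stage whose heat is diverted to the thermal energy store. A minimal ideal-gas sketch follows; all numbers are assumed illustrative values, not the paper's operating conditions.

```python
# One adiabatic compression stage of the ideal-gas air path (illustrative values)
gamma = 1.4            # heat-capacity ratio of air
cp = 1005.0            # specific heat at constant pressure, J/(kg K)
T1 = 293.15            # inlet temperature, K
p_ratio = 10.0         # stage pressure ratio
eta_is = 0.85          # isentropic efficiency of the compressor

T2s = T1 * p_ratio ** ((gamma - 1.0) / gamma)  # isentropic outlet temperature
T2 = T1 + (T2s - T1) / eta_is                  # actual outlet temperature
w_in = cp * (T2 - T1)                          # specific compression work, J/kg
q_tes = w_in                                   # heat storable by the TES if the air is cooled back to T1
```

In the full model, the exergy destroyed in this stage (and in the heat exchangers, with their pinch) is what the exergoeconomic analysis prices out.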
Procedia PDF Downloads 247
1230 The Effect of Electrical Discharge Plasma on Inactivation of Escherichia Coli MG 1655 in Pure Culture
Authors: Zoran Herceg, Višnja Stulić, Anet Režek Jambrak, Tomislava Vukušić
Abstract:
Electrical discharge plasma is a new non-thermal processing technique used for the inactivation of contaminating and hazardous microbes in liquids. Plasma is a source of different antimicrobial species, including UV photons, charged particles, and reactive species such as superoxide, hydroxyl radicals, nitric oxide and ozone. Escherichia coli was studied as a foodborne pathogen. The aim of this work was to examine the inactivation effects of electrical discharge plasma treatment on Escherichia coli MG 1655 in pure culture. Two plasma configurations and polarities were used. The first configuration used a titanium wire as the high-voltage needle; the second used a medical stainless steel needle to form bubbles in the treated volume, with a titanium wire as the high-voltage needle. Model solution samples were inoculated with Escherichia coli MG 1655 and treated by electrical discharge plasma at treatment times of 5 and 10 min and frequencies of 60, 90 and 120 Hz. With the first configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation rate was a 1.3 log₁₀ reduction, and after 10 minutes of treatment it was a 3.0 log₁₀ reduction. At a frequency of 90 Hz, after 10 minutes the inactivation rate was a 1.3 log₁₀ reduction. With the second configuration, after 5 minutes of treatment at a frequency of 120 Hz the inactivation rate was a 1.2 log₁₀ reduction, and after 10 minutes of treatment it was also a 3.0 log₁₀ reduction. This work also examined the formation of biofilm, nucleotide and protein leakage at 260/280 nm before and after treatment, and the recovery of treated samples. Further optimization of the method is needed to understand the mechanism of inactivation.
Keywords: electrical discharge plasma, Escherichia coli MG 1655, inactivation, point-to-plate electrode configuration
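The reported inactivation rates are log₁₀ reductions of the viable count. A small helper makes the arithmetic explicit; the CFU counts below are invented for illustration and only chosen to be consistent with the reductions reported above.

```python
import math

def log_reduction(n0, n):
    """Log10 reduction between initial and surviving viable counts (CFU/mL)."""
    return math.log10(n0 / n)

# Illustrative counts consistent with the reductions reported above (values assumed)
before = 1.0e7
after_5min = before / 10 ** 1.3     # a 1.3-log reduction
after_10min = before / 10 ** 3.0    # a 3.0-log reduction
```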
Procedia PDF Downloads 433
1229 A Low Cost Non-Destructive Grain Moisture Embedded System for Food Safety and Quality
Authors: Ritula Thakur, Babankumar S. Bansod, Puneet Mehta, S. Chatterji
Abstract:
Moisture plays an important role in the storage, harvesting and processing of food grains and related agricultural products, and it is an important characteristic of most agricultural products for the maintenance of quality. Accurate knowledge of the moisture content can be of significant value in maintaining quality and preventing contamination of cereal grains. The present work reports the design and development of a microcontroller-based, low-cost, non-destructive moisture meter, which uses the complex impedance measurement method for moisture measurement of wheat using a parallel-plate capacitor arrangement. Moisture can conveniently be sensed by measuring the complex impedance using a small parallel-plate capacitor sensor filled with the kernels in between the two plates, exciting the sensor at 30 kHz and 100 kHz. The effects of density and temperature variations were compensated by providing suitable corrections in the developed algorithm. The results were compared with the standard dry oven technique, and the developed method was found to be highly accurate, with less than 1% error. The developed moisture meter is a low-cost, highly accurate, non-destructive instrument for determining the moisture of grains, utilizing the fast computing capabilities of a microcontroller.
Keywords: complex impedance, moisture content, electrical properties, safety of food
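The sensing principle can be sketched by modeling the grain-filled sensor as a parallel-plate capacitor with a lossy dielectric. The permittivity, loss tangent and geometry below are assumed values for illustration, not the calibrated wheat data.

```python
import math
import cmath

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_impedance(eps_r, tan_delta, area, gap, freq):
    """Complex impedance of a parallel-plate sensor filled with a lossy
    dielectric (the grain); eps_r and tan_delta are assumed values."""
    c = EPS0 * eps_r * area / gap                  # capacitance, F
    omega = 2.0 * math.pi * freq
    y = complex(omega * c * tan_delta, omega * c)  # admittance G + jB
    return 1.0 / y

# Illustrative sensor: 50 cm^2 plates, 1 cm gap, wheat-like dielectric
z30 = plate_impedance(3.5, 0.2, 50e-4, 1e-2, 30e3)
z100 = plate_impedance(3.5, 0.2, 50e-4, 1e-2, 100e3)
phase30 = math.degrees(cmath.phase(z30))  # capacitive, so the phase is negative
```

In the real meter, |Z| and phase at the two excitation frequencies would be mapped to moisture content through a calibration curve, with the density and temperature corrections the abstract mentions applied on top.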
Procedia PDF Downloads 463
1228 Biomass and Lipid Enhancement by Response Surface Methodology in High Lipid Accumulating Indigenous Strain Rhodococcus opacus and Biodiesel Study
Authors: Kulvinder Bajwa, Narsi R. Bishnoi
Abstract:
Finding a sustainable alternative to today's petrochemical industry is a major challenge faced by researchers, scientists, chemical engineers, and society at the global level. Microorganisms are considered a sustainable feedstock for third-generation biofuel production. In this study, we investigated the potential of a native bacterial strain, isolated from a petrol-contaminated site, for the production of biodiesel. The bacterium was identified as Rhodococcus opacus by biochemical tests and 16S rRNA sequencing. Compositional analysis of the bacterial biomass was carried out by Fourier transform infrared spectroscopy (FTIR) in order to confirm the lipid profile. Lipid and biomass production were optimized with the Box-Behnken design (BBD) of response surface methodology. The factors selected for the optimization of growth conditions were glucose, yeast extract, and ammonium nitrate concentration. The experimental model developed through RSM in terms of effective operational factors (BBD) was found to be suitable to describe lipid and biomass production, which indicated higher lipid and biomass with a minimum concentration of ammonium nitrate and yeast extract and a rather higher dose of glucose supplementation. The optimum results of the experiments were 2.88 gL⁻¹ biomass and a lipid content of 38.75% at glucose 20 gL⁻¹, ammonium nitrate 0.5 gL⁻¹ and yeast extract 1.25 gL⁻¹. Furthermore, GC-MS analysis revealed that Rhodococcus opacus has a favorable fatty acid profile for biodiesel production.
Keywords: biofuel, oleaginous bacteria, Rhodococcus opacus, FTIR, BBD, free fatty acids
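A Box-Behnken design paired with a second-order response surface can be sketched as follows. The design matrix is the standard coded three-factor BBD, but the response values are synthetic, not the paper's measurements.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(x):
    """Full second-order model: intercept, linear, interaction and squared terms."""
    n, k = x.shape
    cols = [np.ones(n)]
    cols += [x[:, i] for i in range(k)]
    cols += [x[:, i] * x[:, j] for i, j in combinations(range(k), 2)]
    cols += [x[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Coded Box-Behnken design for 3 factors (glucose, NH4NO3, yeast extract)
bbd = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)]
               + [[a, 0, c] for a in (-1, 1) for c in (-1, 1)]
               + [[0, b, c] for b in (-1, 1) for c in (-1, 1)]
               + [[0, 0, 0]] * 3, dtype=float)   # 12 edge points + 3 center replicates

# Illustrative response: lipid content (%), not the paper's data
y = (30 + 5 * bbd[:, 0] - 3 * bbd[:, 1] + 1.5 * bbd[:, 2]
     - 2 * bbd[:, 0] ** 2 - 1 * bbd[:, 1] ** 2 + 0.5 * bbd[:, 0] * bbd[:, 1])

X = quadratic_design_matrix(bbd)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted response-surface coefficients
```

The fitted quadratic surface is what RSM then optimizes to locate the best factor combination, here in coded units.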
Procedia PDF Downloads 136
1227 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs
Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou
Abstract:
We predict the structural, phase transition, and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) NaZnAs compound in this study. Calculations are carried out by employing the full potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as other structural parameters for each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic energy band structures for all structural phases, at the level of PBE-GGA as well as the mBJ potential, point out that the NaZnAs compound is a direct (Γ–Γ) band gap semiconductor material. However, compared to PBE-GGA, the mBJ potential approximation reproduces higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient and energy loss-function spectra are performed over a photon energy range from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.
Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties
Procedia PDF Downloads 437
1226 Using Q-Learning to Auto-Tune PID Controller Gains for Online Quadcopter Altitude Stabilization
Authors: Y. Alrubyli
Abstract:
Unmanned Aerial Vehicles (UAVs), and more specifically quadcopters, need to be stable during their flights. Altitude stability is usually achieved by using a PID controller built into the flight controller software. The PID controller has gains that need to be tuned to reach optimal altitude stabilization during the quadcopter's flight. For that, control system engineers tune those gains by extensive modeling of the environment, which might change from one environment and condition to another. As quadcopters penetrate more sectors, from the military to the consumer sector, they are being put into complex and challenging environments more than ever before. Hence, intelligent self-stabilizing quadcopters are needed to maneuver through those complex environments and situations. Here we show that by using online reinforcement learning with minimal background knowledge, altitude stability of the quadcopter can be achieved using a model-free approach. We found that by using background knowledge, instead of letting the online reinforcement learning algorithm wander for a while to tune the PID gains, altitude stabilization can be achieved faster. In addition, this approach accelerates development by avoiding extensive simulations before applying the PID gains to the real-world quadcopter. Our results demonstrate the possibility of using the trial-and-error approach of reinforcement learning combined with background knowledge to achieve faster quadcopter altitude stabilization in different environments and conditions.
Keywords: reinforcement learning, Q-learning, online learning, PID tuning, unmanned aerial vehicle, quadcopter
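The idea of letting reinforcement learning pick PID gains can be illustrated with a deliberately simplified, single-state (bandit-style) Q-learning update on a toy altitude model. The plant, gain grid and hyperparameters are all assumptions for illustration, not the authors' quadcopter setup.

```python
import numpy as np

def episode_cost(kp, ki=0.4, kd=1.2, dt=0.02, steps=400):
    """Integrated squared altitude error of a unit step response (toy quadcopter:
    double integrator with drag; gains other than Kp held fixed)."""
    z = v = integ = cost = 0.0
    for _ in range(steps):
        e = 1.0 - z
        integ += e * dt
        u = kp * e + ki * integ - kd * v   # derivative of error is -v for a fixed setpoint
        v += (u - 0.5 * v) * dt            # vertical dynamics with drag
        z += v * dt
        cost += e * e * dt
    return cost

gains = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # candidate Kp values
Q = np.zeros(len(gains))                  # single-state action values, zero-initialized
rng = np.random.default_rng(1)
for _ in range(60):
    # epsilon-greedy action selection over the gain grid
    a = rng.integers(len(gains)) if rng.random() < 0.2 else int(np.argmax(Q))
    Q[a] = -episode_cost(gains[a])        # learning rate 1: the toy environment is deterministic
best_kp = gains[int(np.argmax(Q))]
```

Because rewards are negative and Q starts at zero, the greedy policy tries every gain at least once before settling on the lowest-cost one; the paper's use of background knowledge would correspond to initializing Q near a hand-tuned gain instead of zeros.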
Procedia PDF Downloads 178
1225 EEG Analysis of Brain Dynamics in Children with Language Disorders
Authors: Hamed Alizadeh Dashagholi, Hossein Yousefi-Banaem, Mina Naeimi
Abstract:
The current study was established for EEG signal analysis in patients with language disorders. A language disorder can be defined as a meaningful delay in the use or understanding of spoken or written language; the disorder can involve the content or meaning of language, its form, or its use. Here we applied Z-score, power spectrum, and coherence methods to discriminate the language disorder data from healthy data. The power spectrum of each channel in the alpha, beta, gamma, delta, and theta frequency bands was measured. In addition, the intra-hemispheric Z-score was obtained by a scoring algorithm. The obtained results showed high Z-scores and power spectra in posterior regions. Therefore, we can conclude that people with language disorders have higher brain activity in the frontal region of the brain in comparison with healthy people. The results showed that high coherence correlates with irregularities in the ERP and is often found during complex tasks, whereas low coherence is often found in pathological conditions. The Z-score analysis of the brain dynamics showed a higher Z-score peak frequency in the delta, theta and beta sub-bands of language disorder patients. In this analysis, there were signs of activity in both hemispheres, and the left (dominant) hemisphere was more active than the right.
Keywords: EEG, electroencephalography, coherence methods, language disorder, power spectrum, z-score
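The band-power and coherence computations described above can be sketched with SciPy's spectral estimators on synthetic two-channel data sharing an alpha rhythm; the signals are simulated, not patient EEG.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 256.0
t = np.arange(0, 8.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Two synthetic channels sharing a 10 Hz (alpha) rhythm plus independent noise
alpha = np.sin(2 * np.pi * 10.0 * t)
ch1 = alpha + 0.5 * rng.standard_normal(t.size)
ch2 = 0.8 * alpha + 0.5 * rng.standard_normal(t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}
f, psd = welch(ch1, fs=fs, nperseg=512)      # Welch power spectral density
df = f[1] - f[0]

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return psd[m].sum() * df                 # integrate the PSD over the band

powers = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
fc, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
coh_peak = coh[np.argmin(np.abs(fc - 10.0))]  # inter-channel coherence at 10 Hz
```

Z-scoring the resulting band powers against a normative group, as in the abstract, would be a per-band standardization of `powers` across subjects.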
Procedia PDF Downloads 425
1224 Real-Time Web Map Service Based on Solar-Powered Unmanned Aerial Vehicle
Authors: Sunghun Jung
Abstract:
The existing web map service providers contract with satellite operators to update their maps, paying an astronomical amount of money, but this cost could be minimized by operating a cheap and small UAV. In contrast to satellites, UAVs only require aged battery packs to be replaced from time to time. Utilizing both a regular camera and an infrared camera mounted on a small, solar-powered, long-endurance, and hoverable UAV, daytime ground surface photographs and nighttime infrared photographs will be continuously and repeatedly uploaded to the web map server and overlapped with the existing ground surface photographs in real time. The real-time web map service using a small, solar-powered, long-endurance, and hoverable UAV can also be applied to surveillance missions, in particular to detect border-area intruders. An improved real-time image stitching algorithm is developed for overlapping the graphic map data. Also, a small home server will be developed to manage the huge volume of incoming map data. Map photographs taken at tens or hundreds of kilometers by a UAV would improve the map graphic resolution compared to map photographs taken at thousands of kilometers by satellites, since satellite photographs are limited by weather conditions.
Keywords: long-endurance, real-time web map service (RWMS), solar-powered, unmanned aerial vehicle (UAV)
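The core alignment step of image stitching, estimating the translation between overlapping frames, can be sketched with phase correlation. This is a standard technique; whether the authors' improved algorithm builds on it is an assumption.

```python
import numpy as np

def estimate_shift(moved, ref):
    """Estimate the integer (dy, dx) translation mapping ref onto moved via
    phase correlation, the core alignment step of image stitching."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ground = rng.random((128, 128))                           # reference frame
shifted = np.roll(ground, shift=(7, -12), axis=(0, 1))    # simulated next frame
dy, dx = estimate_shift(shifted, ground)                  # recovers (7, -12)
```

In a real stitcher this offset (refined to sub-pixel accuracy and combined with blending) is what lets consecutive UAV photographs be mosaicked onto the map layer.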
Procedia PDF Downloads 276
1223 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing, with resources collectively stored within a data lake. Involvement in the digital transformation has necessitated standardizing the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 112
1222 The Malfatti’s Problem in Reuleaux Triangle
Authors: Ching-Shoei Chiang
Abstract:
The Malfatti's Problem asks for the fitting of 3 circles into a right triangle such that they are tangent to each other and each circle is also tangent to a pair of the triangle's sides. This problem has been extended to any triangle (the general Malfatti's Problem). Furthermore, the problem has been extended to 1+2+…+n circles; we call this the extended general Malfatti's problem. The tangency graph of these circles, using the centers of the circles as vertices with an edge connecting two centers whenever the corresponding circles are tangent to each other, has the structure of Pascal's triangle, and the exterior circles are tangent to the three sides of the triangle. The extended general Malfatti's problem has closed-form solutions for n=1, 2, and becomes complex when n is greater than 2. In solving the extended general Malfatti's problem (n>2), we initially give values to the radii of all circles. From the tangency graph and the current radii, we can compute the angle between two vectors that run from the center of a circle to its tangency points with surrounding elements, where the surrounding elements can be the boundary of the triangle or other circles. For each circle C, there are vectors from its center c to its tangency points with its neighbors (counted clockwise) pi, i=0,1,2,…,n. We add all angles between cpi and cp(i+1) mod (n+1), i=0,1,…,n, and call the sum sumangle(C) for circle C. Using sumangle(C), we reduce or enlarge the radii of all circles in the next iteration, until sumangle(C) is equal to 2π for all circles. With a similar idea, this paper proposes an algorithm to find the radii of circles whose tangency graph has the structure of Pascal's triangle and whose exterior circles are tangent to the unit Reuleaux triangle.
Keywords: Malfatti’s problem, geometric constraint solver, computer-aided geometric design, circle packing, data visualization
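The n=1 case is the three classical Malfatti circles, each tangent to two sides of the triangle and to the other two circles. The sketch below solves those tangency conditions with a generic root-finder, rather than the authors' sumangle iteration, and for an ordinary triangle rather than the Reuleaux triangle.

```python
import numpy as np
from scipy.optimize import fsolve

def malfatti_radii(verts):
    """Radii of the three classical Malfatti circles of a triangle, found by
    solving the pairwise tangency conditions numerically."""
    verts = [np.asarray(v, float) for v in verts]

    def center(i, r):
        # Circle tangent to the two sides at vertex i lies on the internal
        # bisector, at distance r / sin(half vertex angle) from the vertex.
        p = verts[i]
        u = verts[(i + 1) % 3] - p
        w = verts[(i + 2) % 3] - p
        u /= np.linalg.norm(u)
        w /= np.linalg.norm(w)
        bis = (u + w) / np.linalg.norm(u + w)
        half = 0.5 * np.arccos(np.clip(u @ w, -1.0, 1.0))
        return p + bis * (r / np.sin(half))

    def equations(r):
        # External tangency: center distance equals the sum of the radii
        c = [center(i, r[i]) for i in range(3)]
        return [np.linalg.norm(c[i] - c[(i + 1) % 3]) - r[i] - r[(i + 1) % 3]
                for i in range(3)]

    return fsolve(equations, [0.1, 0.1, 0.1])

# Equilateral unit triangle: by symmetry all three radii must coincide
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
r = malfatti_radii(tri)
```

For the unit equilateral triangle the known closed-form radius is (√3 − 1)/4, which the numerical solution reproduces; extending the unknowns and tangency equations to the Pascal-triangle tangency graph is the n>2 setting the abstract addresses.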
Procedia PDF Downloads 133
1221 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT
Authors: R. R. Ramsheeja, R. Sreeraj
Abstract:
A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain and kidneys. Computed tomography (CT) is one of the most significant of these modalities. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of liver tumors. A segmentation method for detecting tumors from CT scans is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region-of-interest (ROI) based method helps to characterize the differences between features and has a significant impact on classification performance. Owing to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple-ROI-based feature selection and classification are performed. Obtaining relevant features is important for the good generalization performance of the Support Vector Machine (SVM) classifier. The proposed system improves classification performance, with a significant reduction in the number of features used. The diagnosis of liver cancer from computed tomography images is very difficult in nature, and early detection of liver tumors is very helpful for saving human lives.
Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification
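The denoising and adaptive-thresholding pipeline can be sketched with SciPy on a synthetic slice; the image, filter sizes and threshold offset are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (64, 64))   # synthetic slice background with noise
img[20:36, 20:36] += 60.0                # bright lesion-like region (illustrative)

den = ndimage.gaussian_filter(img, sigma=2.0)       # Gaussian denoising step
local_mean = ndimage.uniform_filter(den, size=31)   # adaptive threshold surface
mask = den > local_mean + 10.0                      # offset rejects flat background

labels, n = ndimage.label(mask)                     # connected candidate regions
sizes = ndimage.sum(mask, labels, range(1, n + 1))
largest = sizes.max() if n else 0.0                 # area of the biggest candidate
```

In the paper's full system, features extracted from multiple ROIs around such candidate regions would then be fed to the SVM classifier; summing candidate areas across slices is how a tumor volume estimate would be accumulated.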
Procedia PDF Downloads 509
1220 Automatic Detection of Defects in Ornamental Limestone Using Wavelets
Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas
Abstract:
A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that will allow the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition executed on two instances of the original image, to detect both hypotheses – dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters possible in the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimensions of the defects allowed.
Keywords: automatic detection, defects, fracture lines, wavelets
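The detail sub-bands of a wavelet decomposition respond strongly at spots and fracture lines while staying silent over uniform stone. Below is a plain-NumPy single-level Haar sketch (a library such as PyWavelets would be used in practice); the plate image and threshold are invented for illustration.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.
    Returns the approximation and the three detail sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # vertical-edge details
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # horizontal-edge details
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal details
    return ll, lh, hl, hh

# Synthetic plate: uniform stone with a dark spot and a thin fracture line
plate = np.full((128, 128), 200.0)
plate[41:57, 41:57] = 80.0                    # dark colored spot
plate[:, 90] = 60.0                           # fracture line

ll, lh, hl, hh = haar2d(plate)
energy = lh**2 + hl**2 + hh**2                # detail energy map
defect_map = energy > 100.0                   # threshold tuned for this example
```

The threshold plays the role of the tunable accuracy parameter the abstract mentions: raising it ignores small defects, lowering it tightens the contours.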
Procedia PDF Downloads 249
1219 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
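The first step, concatenating each term's word2vec embedding with its Lexical Semantic Vector, can be sketched as follows. The dimensions and the random stand-in vectors are assumptions; real embeddings would come from the trained word2vec and LSV models.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["发热", "咳嗽", "肺炎"]   # example medical terms (illustrative)
w2v_dim, lsv_dim = 100, 20

# Stand-ins for trained embeddings: word2vec vectors and Lexical Semantic Vectors
word2vec = {t: rng.standard_normal(w2v_dim) for t in vocab}
lsv = {t: rng.standard_normal(lsv_dim) for t in vocab}

def enrich(tokens):
    """Concatenate each token's word2vec embedding with its LSV, producing the
    enriched feature matrix fed to the CNN input layer."""
    return np.stack([np.concatenate([word2vec[t], lsv[t]]) for t in tokens])

features = enrich(["发热", "咳嗽"])   # one (w2v_dim + lsv_dim) row per token
```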
Procedia PDF Downloads 126
1218 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework
Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim
Abstract:
Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling the dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining the LBP histogram with the mean values of the RGB color channels. Because of the invariance of LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance to illumination variations. A pixel-based codebook is employed to reinforce the precision of the outputs of the first layer by further eliminating false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change
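The LBP measure used in the first layer can be sketched in NumPy. The invariance to monotonic gray-scale changes that the abstract relies on is visible in the comparison against a brightness-shifted copy; this is the basic 8-neighbour LBP, and the paper's exact variant may differ.

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern of a grayscale image, computed on
    interior pixels. A bit is set when the neighbour is strictly brighter than
    the centre, so a global brightness shift leaves every code unchanged."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb > c).astype(np.uint8) << bit
    return code

img = np.tile(np.arange(8, dtype=float), (8, 1))   # horizontal intensity ramp
codes = lbp8(img)
brighter = lbp8(img + 50.0)                        # global illumination shift
```

In the two-layer model, histograms of these codes over blocks form the texture part of each block's codeword, alongside the RGB channel means.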
Procedia PDF Downloads 218
1217 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam
Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen
Abstract:
In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were analyzed with three machine learning algorithms: Gaussian processes, multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm, for modelling and forecasting the horizontal displacement of the dam. The database used in this research was built by collecting time series of data from 2006 to 2021 and was divided into two parts: a training dataset and a validating dataset. The final results show that all three algorithms have high performance for both training and model validation, but the MLP is the best model. Their usability is further investigated by comparison with benchmark models created by multi-linear regression. The results show that the performance obtained from the GP model, the MLP model and the M5-Rules model is much better than the benchmark; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks
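One of the compared models, Gaussian process regression, can be sketched in plain NumPy against a linear benchmark standing in for the multi-linear regression baseline. The displacement series below is synthetic (a seasonal cycle plus a trend), not the Ialy monitoring data.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, noise=1e-4):
    """Gaussian-process regression posterior mean with an RBF kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    K = k(x_train, x_train) + noise * np.eye(x_train.size)  # jitter for stability
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# Synthetic "seasonal displacement" series (mm): yearly-like cycle plus a trend
t = np.linspace(0.0, 6.0, 40)
disp = 3.0 * np.sin(2.0 * np.pi * t / 2.0) + 0.5 * t

t_test = np.linspace(0.2, 5.8, 25)
truth = 3.0 * np.sin(2.0 * np.pi * t_test / 2.0) + 0.5 * t_test
gp_err = np.abs(gp_predict(t, disp, t_test, length=0.5) - truth).max()

# Benchmark: ordinary linear regression on time (stand-in for the multi-linear model)
A = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(A, disp, rcond=None)
lin_err = np.abs(np.column_stack([np.ones_like(t_test), t_test]) @ coef - truth).max()
```

The gap between the two errors mirrors the paper's finding: flexible nonlinear models capture the cyclic component of dam displacement that a linear benchmark cannot.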
Procedia PDF Downloads 213
1216 Creation of Ultrafast Ultra-Broadband High Energy Laser Pulses
Authors: Walid Tawfik
Abstract:
The interaction of high-intensity ultrashort laser pulses with plasma generates many significant applications, including soft X-ray lasers, time-resolved laser-induced plasma spectroscopy (LIPS), and laser-driven accelerators. The development of femtosecond down to ten-femtosecond optical pulses has provided scientists with a vital tool for a variety of ultrashort phenomena, such as high-field physics, femtochemistry and high harmonic generation (HHG). In this research, we generate two-octave-wide ultrashort supercontinuum pulses with an optical spectrum extending from 3.5 eV (ultraviolet) to 1.3 eV (near-infrared) using a capillary fiber filled with neon gas. These pulses are formed by nonlinear self-phase modulation in the neon gas acting as a nonlinear medium. The generated pulses were characterized using spectral phase interferometry for direct electric-field reconstruction (SPIDER), and a complete description of the output pulses was obtained. The observed characterization of the produced pulses includes the beam profile, the pulse width, and the spectral bandwidth. After reaching optimal conditions, the intensity autocorrelation function of the reconstructed pulse was used to determine the shortest pulse duration, achieving transform-limited ultrashort pulses with durations below 6 fs and energies up to 600 μJ. Moreover, the effect of neon pressure variation on the pulse width was examined; the nonlinear self-phase modulation was found to increase with the pressure of the neon gas. The observed results may lead to an advanced method to control and monitor ultrashort transient interactions in femtochemistry.
Keywords: supercontinuum, ultrafast, SPIDER, ultra-broadband
Procedia PDF Downloads 224
1215 Determining Factors for Successful Blended Learning in Higher Education: A Qualitative Study
Authors: Pia Wetzl
Abstract:
The learning process of students can be optimized by combining online teaching with face-to-face sessions. So-called blended learning offers extensive flexibility as well as contact opportunities with fellow students and teachers. Furthermore, learning can be individualized and self-regulated. The aim of this article is to investigate which factors are necessary for blended learning to be successful. Semi-structured interviews were conducted with students (N = 60) and lecturers (N = 21) from different disciplines at two German universities. The questions focused on the perception of online, face-to-face and blended learning courses. In addition, questions focused on possible optimization potential and obstacles to practical implementation. The results show that on-site presence is very important for blended learning to be successful. If students do not get to know each other on-site, there is a risk of loneliness during the self-learning phases. This has a negative impact on motivation. From the perspective of the lecturers, the willingness of the students to participate in the sessions on-site is low. Especially when there is no obligation to attend, group work is difficult to implement because the number of students attending is too low. Lecturers would like to see more opportunities from the university and its administration to enforce attendance. In their view, this is the only way to ensure the success of blended learning. In addition, they see the conception of blended learning courses as requiring a great deal of time, which they are not always willing to invest. More incentives are necessary to keep the lecturers motivated to develop engaging teaching material. The study identifies factors that can help teachers conceptualize blended learning. It also provides specific implementation advice and identifies potential impacts. This catalogue has great value for the future-oriented development of courses at universities. 
Future studies could test its practical use.
Keywords: blended learning, higher education, teachers, student learning, qualitative research
Procedia PDF Downloads 69
1214 3D Geomechanical Model: The Best Solution of the 21st Century for Perforation's Problems
Authors: Luis Guiliana, Andrea Osorio
Abstract:
The lack of comprehension of reservoir geomechanics conditions may cause operational problems that cost the industry billions of dollars per year. The drilling operations at the Ceuta Field, Area 2 South, Maracaibo Lake, have been very expensive due to problems associated with drilling. The principal objective of this investigation is to develop a 3D geomechanical model of this area in order to optimize future drilling in the field. For this purpose, a 1D geomechanical model was built first, following the workflow of the MEM (Mechanical Earth Model), which consists of the following steps: 1) data auditing, 2) analysis of drilling events and structural model, 3) mechanical stratigraphy, 4) overburden stress, 5) pore pressure, 6) rock mechanical properties, 7) horizontal stresses, 8) direction of the horizontal stresses, 9) wellbore stability. The 3D MEM was developed from the geostatistical model of the Eocene C-SUP VLG-3676 reservoir and the 1D MEM, into which the geomechanical grid was embedded. The analysis of the results showed that the problems in the wells examined were mainly due to wellbore stability issues. It was determined that the stress regime changes as the stratigraphic column deepens: it is normal to strike-slip at the Middle Miocene and Lower Miocene, and strike-slip to reverse at the Eocene. In agreement with this, at the level of the Eocene, the most advantageous direction to drill is parallel to the maximum horizontal stress (157°). The 3D MEM allowed a three-dimensional visualization of the variations in rock mechanical properties, stresses, and operational windows (mud weight and pressures). This will facilitate the optimization of future drilling in the area, including those zones without any geomechanics information.
Keywords: geomechanics, MEM, drilling, stress
Procedia PDF Downloads 273
1213 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop, range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed received signal strength indication (RSSI)-based distance estimation in DV-Hop, a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric-coordinate-based PIT test and both range-based (received signal strength) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme achieves a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
Keywords: localization, trilateration, triangulation, wireless sensor networks
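The barycentric point-in-triangle check at the heart of a PIT test can be sketched as follows. This is a generic textbook formulation for illustration, not the paper's energy-efficient variant, and the coordinates are hypothetical anchor positions:

```python
def barycentric_pit(p, a, b, c):
    """Return True if point p lies inside triangle (a, b, c),
    determined via barycentric coordinates l1, l2, l3: the point is
    inside exactly when all three coordinates lie in [0, 1]."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if det == 0:
        return False  # degenerate (collinear) triangle
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - ay)) / det
    l3 = 1.0 - l1 - l2
    return 0 <= l1 <= 1 and 0 <= l2 <= 1 and 0 <= l3 <= 1

# Hypothetical anchor triangle and two test nodes:
inside = barycentric_pit((1, 1), (0, 0), (4, 0), (0, 4))
outside = barycentric_pit((5, 5), (0, 0), (4, 0), (0, 4))
```

Running the PIT test against many anchor triangles narrows down the region containing the unknown node; trilateration then refines the estimate from RSSI distances.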
Procedia PDF Downloads 470
1212 Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles
Authors: Nacer Eddine Chelbi, Ayet Bagane, Annie Saleh, Claude Sauvageau, Denis Gingras
Abstract:
As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix), and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on importance sampling by using a Metropolis-Hastings algorithm (MHS) to sample collected data from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method will be compared to the importance sampling method, which does not perform well in high-dimensional problems. The importance of this study is to obtain a sampler that can be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios necessary for the validation and certification of autonomous vehicles.
Keywords: automated driving, autonomous emergency braking (AEB), autonomous vehicles, certification, evaluation, importance sampling, Metropolis-Hastings sampling, tests
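The random-walk Metropolis-Hastings idea underlying the proposed sampler can be sketched in a few lines. The example is one-dimensional for clarity and uses a standard normal target; the paper applies the method to high-dimensional lane-change parameters from the SPMD data:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings sampler for an unnormalized
    target density, given by its log. A symmetric Gaussian proposal
    makes the acceptance ratio reduce to the target-density ratio."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)              # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        if log_alpha >= 0 or math.log(rng.random()) < log_alpha:
            x = proposal                                 # accept
        samples.append(x)                                # else keep old x
    return samples

# Sample a standard normal, whose log density is -x^2/2 up to a constant:
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical mean and variance approach 0 and 1, as expected for the standard normal target; in the validation setting, `log_target` would instead score scenario parameters by their estimated risk or likelihood.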
Procedia PDF Downloads 289
1211 Magnetic Cellulase/Halloysite Nanotubes as Biocatalytic System for Converting Agro-Waste into Value-Added Product
Authors: Devendra Sillu, Shekhar Agnihotri
Abstract:
A 'nano-biocatalyst' utilizes an ordered assembly of enzymes onto nanomaterial carriers to catalyze desirable biochemical kinetics with substrate selectivity. The current study describes an interdisciplinary approach for converting an agricultural waste, sugarcane bagasse, into D-glucose, exploiting halloysite nanotubes (HNTs) decorated with cellulase enzyme as the nano-biocatalytic system. Cellulase was successfully immobilized on HNTs employing polydopamine as an eco-friendly crosslinker, while iron oxide nanoparticles were attached to facilitate magnetic recovery of the material. The characterization studies (UV-Vis, TEM, SEM, and XRD) displayed the characteristic features of both cellulase and magnetic HNTs in the resulting nanocomposite. Various factors that may influence the activity of the biocatalytic system (working pH, temperature, crosslinker concentration, enzyme concentration) were investigated. The experimental design was performed using Response Surface Methodology (RSM) for process optimization. The data demonstrated that the nanobiocatalyst retained 80.30% activity even at elevated temperature (55 °C) and showed excellent storage stability after 10 days. Repeated use of the system revealed remarkably consistent relative activity over several cycles. The immobilized cellulase was employed to decompose the agro-waste, and a maximum decomposition rate of 67.2% was achieved. Conclusively, magnetic HNTs can serve as a potential support for enzyme immobilization with long-term usage, good efficacy, reusability, and easy recovery from solution.
Keywords: halloysite nanotubes, enzyme immobilization, cellulase, response surface methodology, magnetic recovery
Procedia PDF Downloads 133
1210 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material
Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit
Abstract:
The purpose of this study was to prepare time- and pH-dependent release tablets of an Ayurvedic Churna formulation and evaluate their advantages as a colon-targeted drug delivery system. The Vidangadi Churna, which contains embelin and gallic acid, was selected for this study. Embelin is used as a therapeutic agent in helminthiasis. Embelin is insoluble in water and unstable in the gastric environment, so it was formulated into time- and pH-dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets (150 mg) of dried extract and lactose were prepared by the wet granulation method. Compression coating with a polymer amount of 150 mg for each of the upper and lower coating layers was investigated. The results showed that no release occurred in 0.1 N HCl or pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer within a total of 17 hours. The in vitro release profile of the drug from the formulation was best expressed by first-order kinetics, which gave the highest linearity (r² = 0.9943). The results of the present study demonstrate that the time- and pH-dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of embelin and gallic acid for the treatment of helminthiasis.
Keywords: embelin, gallic acid, Vidangadi Churna, colon targeted drug delivery
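The first-order fit reported above (release rate proportional to the drug remaining) can be sketched as a simple log-linear regression. The data points below are hypothetical values generated from an assumed rate constant, not the study's measured release profile:

```python
import math

def fit_first_order_k(times, released_pct):
    """Estimate the first-order release rate constant k (1/h) by
    linear least squares on ln(100 - Q) = ln(100) - k*t, where Q is
    the cumulative percent of drug released at time t."""
    ys = [math.log(100.0 - q) for q in released_pct]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope  # k is the negative of the fitted slope

# Hypothetical data generated with k = 0.25 per hour, starting after
# the 5-hour lag phase (illustrative only):
t = [6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
q = [100.0 * (1.0 - math.exp(-0.25 * ti)) for ti in t]
k = fit_first_order_k(t, q)
```

On real dissolution data the same regression yields the r² value used to rank the kinetic models against zero-order, Higuchi, and other alternatives.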
Procedia PDF Downloads 360
1209 Path Planning for Orchard Robot Using Occupancy Grid Map in 2D Environment
Authors: Satyam Raikwar, Thomas Herlitzius, Jens Fehrmann
Abstract:
In recent years, the autonomous navigation of orchard and field robots has become an emerging technology in agricultural mobile robotics. Autonomous navigation builds upon path planning, which is still a crucial issue. Generally, for simple representation, path planning for a mobile robot is performed in a two-dimensional space, creating a path between the start and goal points. This paper presents an automatic path planning approach for robots used in orchards and vineyards, using occupancy grid maps with field considerations. Orchards and vineyards are usually structured environments and their topology is assumed to be constant over time; therefore, in this approach, an RGB image of a field is used as the working environment. These images underwent different image processing operations and were then discretized into two-dimensional grid matrices. Each cell of these grid matrices represents the occupancy of the space, whether free or occupied. The grid matrix represents the robot workspace for motion and path planning. After the grid matrix is constructed, a probabilistic roadmap (PRM) algorithm is used to create an obstacle-free path over these occupancy grids. The path created by this method was successfully verified in the test area. Furthermore, this approach is used in the navigation of the orchard robot.
Keywords: orchard robots, automatic path planning, occupancy grid, probabilistic roadmap
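Planning over an occupancy grid can be sketched as follows. For brevity this uses breadth-first search rather than the probabilistic roadmap of the paper (a deliberate simplification: PRM samples random free configurations and connects them into a graph, but the grid representation and the obstacle-free path criterion are the same), and the map is a toy hand-made grid rather than one derived from an RGB field image:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid (0 = free cell,
    1 = occupied cell). Returns a shortest obstacle-free list of
    (row, col) cells from start to goal, or None if none exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no obstacle-free path

# A toy map: a row of trees (1s) with a gap at the bottom.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (0, 2))
```

The planner must route around the tree row through the gap, so the returned path detours through the bottom row of the grid.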
Procedia PDF Downloads 156
1208 A Data-Mining Model for Protection of FACTS-Based Transmission Line
Authors: Ashok Kalagura
Abstract:
This paper presents a data-mining model for fault-zone identification of flexible AC transmission system (FACTS)-based transmission lines including a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness in the ensemble of decision trees stacked inside the random forests model, it provides an effective decision on fault-zone identification. Half-cycle post-fault current and voltage samples from the fault inception are used as the input vector against target outputs that distinguish faults occurring after the TCSC/UPFC from faults occurring before it. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including a noisy environment, providing a reliability measure of 99% with a fast response time (3/4 of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
Keywords: distance relaying, fault-zone identification, random forests (RFs), support vector machine (SVM), thyristor-controlled series compensator (TCSC), unified power-flow controller (UPFC)
Procedia PDF Downloads 424
1207 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with samples ranging from low to high soil water content among those evaluated in this study.
Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering
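The kind of best-fit exercise described above can be sketched for the Brooks and Corey model, whose effective-saturation form becomes linear in log-log space. The fit below assumes the air-entry value is known and uses synthetic sand-like data, not the laboratory measurements of the study:

```python
import math

def brooks_corey(psi, psi_b, lam):
    """Brooks-Corey effective saturation: Se = (psi_b / psi)**lam for
    suctions psi above the air-entry value psi_b, else Se = 1."""
    return 1.0 if psi <= psi_b else (psi_b / psi) ** lam

def fit_lambda(suction, se, psi_b):
    """Least-squares fit of the pore-size index lambda by regressing
    ln(Se) on ln(psi_b/psi) through the origin. psi_b is assumed
    known here; in practice it is optimized together with lambda."""
    xs = [math.log(psi_b / p) for p in suction]
    ys = [math.log(s) for s in se]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic sand-like data generated with lambda = 1.8, psi_b = 2 kPa
# (illustrative values only):
psi = [4.0, 8.0, 16.0, 32.0]
se = [brooks_corey(p, 2.0, 1.8) for p in psi]
lam = fit_lambda(psi, se, 2.0)
```

With noisy laboratory data the same regression yields the r² and standard-error statistics used to rank the four candidate SWCC equations.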
Procedia PDF Downloads 338
1206 An Adaptive Distributed Incremental Association Rule Mining System
Authors: Adewale O. Ogunde, Olusegun Folorunso, Adesina S. Sodiya
Abstract:
Most existing Distributed Association Rule Mining (DARM) systems still face several challenges. One such challenge, which has received little attention from researchers, is the inability of existing systems to adapt to constantly changing databases and mining environments. In this work, an Adaptive Incremental Mining Algorithm (AIMA) is therefore proposed to address these problems. AIMA employs multiple mobile agents for the entire mining process. AIMA was designed to adapt to changes in the distributed databases by mining only the incremental database updates and using these to update the existing rules, in order to improve the overall response time of the DARM system. In AIMA, global association rules are integrated incrementally from one data site to another through Results Integration Coordinating Agents. The mining agents in AIMA were made adaptive by defining mining goals with reasoning and behavioral capabilities and protocols that enable them to either maintain or change their goals. AIMA employed the Java Agent Development Environment Extension for designing the internal agents' architecture. Results from experiments conducted on real datasets showed that the adaptive system AIMA performed better than the non-adaptive systems, with lower communication costs and higher task completion rates.
Keywords: adaptivity, data mining, distributed association rule mining, incremental mining, mobile agents
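The core incremental idea, scanning only the new transactions and merging their counts into the existing ones instead of re-mining the whole database, can be sketched as follows. The mobile agents and distributed rule integration of AIMA are omitted, and the transactions are hypothetical:

```python
from collections import Counter
from itertools import combinations

def count_itemsets(transactions, max_size=2):
    """Count occurrences of every itemset of up to max_size items."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    return counts

def incremental_update(counts, total, new_transactions, min_support):
    """Merge counts from the new transactions only (no rescan of the
    old database), then recompute the frequent itemsets against the
    updated total transaction count."""
    counts.update(count_itemsets(new_transactions))
    total += len(new_transactions)
    frequent = {s for s, c in counts.items() if c / total >= min_support}
    return frequent, counts, total

# Initial database, followed by an incremental batch of updates:
counts = count_itemsets([["a", "b"], ["a", "c"]])
frequent, counts, total = incremental_update(
    counts, 2, [["a", "b"], ["b", "c"]], min_support=0.5)
```

Because only the increment is scanned, the cost of an update grows with the batch size rather than with the full database, which is what improves the response time of the mining system.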
Procedia PDF Downloads 393
1205 CFD Analysis of an Aft Sweep Wing in Subsonic Flow and Making Analogy with Roskam Methods
Authors: Ehsan Sakhaei, Ali Taherabadi
Abstract:
In this study, an aft-swept wing with specific characteristics was analyzed with the CFD method in Fluent software. In this analysis, the wing's aerodynamic coefficients were calculated at different rake angles, and the slope of the wing lift curve versus rake angle was obtained. The wing section was selected from the NACA 6-series airfoils. The sweep angle of the wing is 15 degrees, the aspect ratio 8, and the taper ratio 0.4. The wing was designed and modeled in CATIA software, meshed in Gambit software, and its three-dimensional analysis was performed in Fluent software. The CFD methods used here were based on a pressure-based algorithm. The SIMPLE technique was used for solving the Navier-Stokes equations, and the Spalart-Allmaras model was utilized to simulate the three-dimensional wing in air. The Roskam method is one of the most commonly used methods for determining aerodynamic parameters in the field of aircraft design. In this study, besides the CFD analysis, an advanced aircraft analysis tool was used to calculate the aerodynamic coefficients by the Roskam method. The results of the CFD were compared with the data obtained from the Roskam method, and the validity of the relations was evaluated. The results and comparison showed that in the linear region of the lift curve there is only a minor difference between the aerodynamic parameters obtained from CFD and the relations presented by Roskam.
Keywords: aft sweep wing, CFD method, Fluent, Roskam, Spalart-Allmaras model
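The kind of handbook estimate compared against CFD here can be illustrated with the classic subsonic lift-curve-slope relation used in Roskam-style preliminary design. This is a common DATCOM-type form quoted from memory; the exact equation, sweep reference line, and Mach number used in the paper are assumptions:

```python
import math

def lift_curve_slope(aspect_ratio, sweep_half_chord_rad, mach, kappa=1.0):
    """Subsonic wing lift-curve slope (per radian), DATCOM-style:
    CL_alpha = 2*pi*A / (2 + sqrt((A*beta/kappa)^2
                                  * (1 + tan^2(Lambda_c/2)/beta^2) + 4)),
    with beta^2 = 1 - M^2 and kappa the ratio of the section
    lift-curve slope to 2*pi (taken as 1 here)."""
    beta_sq = 1.0 - mach ** 2
    beta = math.sqrt(beta_sq)
    a = aspect_ratio
    tan_sw = math.tan(sweep_half_chord_rad)
    term = (a * beta / kappa) ** 2 * (1.0 + tan_sw ** 2 / beta_sq) + 4.0
    return 2.0 * math.pi * a / (2.0 + math.sqrt(term))

# Wing from the abstract: A = 8, sweep 15 degrees (assumed here to be
# the half-chord sweep), at an assumed low subsonic Mach number:
cl_alpha = lift_curve_slope(8.0, math.radians(15.0), 0.2)
```

For this geometry the formula gives a lift-curve slope of roughly 4.8 per radian, the sort of linear-region value against which the Fluent results would be compared.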
Procedia PDF Downloads 505
1204 The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract:
Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, much as humans do. This abstract explores the importance of visual communication in AI, emphasizing applications such as computer vision, object recognition, image classification, and autonomous systems. Going deeper, it covers the deep learning techniques and neural networks that underpin visual understanding, and discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability. The integration of visual communication with other modalities, such as natural language processing and speech recognition, is also explored. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology examines the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides a comprehensive approach to integrating visual elements into AI systems to make them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration.
Developers can integrate visual elements for efficient and accessible AI systems.
Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication
Procedia PDF Downloads 97
1203 Heat Sink Optimization for a High Power Wearable Thermoelectric Module
Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras
Abstract:
As a result of current energy and environmental issues, the human body is recognized as one of the promising candidates for converting waste heat into electricity (Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict harvesting an adequate amount of energy. To effectively transform the heat radiated by the body into power, the most direct solution is to conform the thermoelectric generator (TEG) to the arbitrary surface of the body and increase the temperature difference across the thermoelectric legs. To this end, a computational study using COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations were conducted at indoor thermal levels and with a stationary wearer. The full thermoelectric characterization of the proposed TEG with a wavy-shaped heat sink has been computed, yielding a maximum power output of 25 µW/cm² at a temperature gradient of nearly 13 °C. It is noteworthy that, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on curved surfaces of the body. Consequently, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated without a heat sink, and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated from the body.
Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat
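The basic relation linking the temperature gradient across the legs to the electrical output, which is what the heat sink improves, can be sketched with the standard matched-load formula. The numbers below are illustrative placeholders, not the module parameters of the paper (whose 25 µW/cm² figure comes from the COMSOL simulation):

```python
def teg_matched_power(seebeck, delta_t, internal_resistance):
    """Maximum (matched-load) electrical power of a thermoelectric
    module: P = (S * dT)^2 / (4 * R), where S is the overall Seebeck
    coefficient (V/K), dT the temperature difference across the
    thermoelectric legs (K), and R the internal resistance (ohm)."""
    v_open = seebeck * delta_t          # open-circuit voltage
    return v_open ** 2 / (4.0 * internal_resistance)

# Assumed example values: S = 10 mV/K overall, dT = 13 K, R = 120 ohm:
p_watts = teg_matched_power(0.010, 13.0, 120.0)
```

Because the output scales with the square of the temperature difference, even a modest improvement in dT from a better heat sink translates into a disproportionately large gain in harvested power.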
Procedia PDF Downloads 151