Search results for: input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2135

1685 Tool for Fast Detection of Java Code Snippets

Authors: Tomáš Bublík, Miroslav Virius

Abstract:

This paper presents general results on the Java source code snippet detection problem. We propose a tool that uses graph and subgraph isomorphism detection. A number of solutions for these tasks have been proposed in the literature; however, although these solutions are fast, they compare only constant, static trees. Our solution allows the input sample to be entered dynamically, in the Scripthon language, while preserving acceptable speed. We use several optimizations to achieve a very low number of comparisons during the matching algorithm.
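
As a rough illustration of the kind of pattern matching involved, the sketch below matches a wildcard pattern against subtrees of a toy AST encoded as (label, children) tuples. The encoding, the "*" wildcard, and the example tree are invented for this sketch; the actual tool matches dynamic Scripthon patterns via subgraph isomorphism with additional optimizations.

```python
# Minimal sketch of AST subtree matching with wildcard pattern nodes.
# Trees are (label, [children]) tuples; "*" in the pattern matches any node.
# Illustration only -- the real tool uses (sub)graph isomorphism.

def matches(pattern, node):
    """Return True if `pattern` matches the tree rooted at `node`."""
    p_label, p_children = pattern
    n_label, n_children = node
    if p_label != "*" and p_label != n_label:
        return False
    if p_label == "*" and not p_children:
        return True                      # bare wildcard matches whole subtree
    if len(p_children) != len(n_children):
        return False
    return all(matches(p, c) for p, c in zip(p_children, n_children))

def find_snippets(pattern, tree):
    """Yield every subtree of `tree` that matches `pattern`."""
    if matches(pattern, tree):
        yield tree
    for child in tree[1]:
        yield from find_snippets(pattern, child)

# Example: find all while-loops with any condition ("*") and a call body.
ast = ("method", [("while", [("lt", [("var", []), ("num", [])]),
                             ("block", [("call", [])])]),
                  ("return", [])])
pattern = ("while", [("*", []), ("block", [("call", [])])])
print(list(find_snippets(pattern, ast)))
```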

Keywords: AST, Java, tree matching, Scripthon, source code recognition

Procedia PDF Downloads 405
1684 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks

Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian

Abstract:

Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of its input parameters. The results show that the model is in good agreement with the experimental data.
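
As a loose illustration of the modeling step, the sketch below fits a small feed-forward network to synthetic data; the feature names (temperature, wash-water rate, demulsifier dose, mixing-valve pressure drop) and the data are hypothetical stand-ins, not the plant variables or network used in the paper.

```python
# Sketch of an MLP regressor mapping desalting-unit operating inputs to
# separation efficiency. Features and targets are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: temperature, wash-water rate, demulsifier dose, mixing-valve dP
X = rng.uniform([40, 2, 5, 0.5], [80, 10, 30, 2.0], size=(200, 4))
y = 0.9 + 0.001 * X[:, 0] - 0.002 * X[:, 3] + rng.normal(0, 0.01, 200)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```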

Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation

Procedia PDF Downloads 410
1683 An Object-Based Image Resizing Approach

Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai

Abstract:

Common methods for resizing an image include scaling and cropping, but both suffer from quality problems in the reduced image. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
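
The sketch below illustrates the pipeline's final stage in miniature: two feature maps are fused into an importance map, and the lowest-importance vertical seam is removed by standard seam-carving dynamic programming. The equal-weight fusion and the random maps stand in for the paper's enhanced saliency and improved gradient maps.

```python
# Fuse two feature maps into an importance map, then remove the
# minimum-importance vertical seam (standard seam carving).
import numpy as np

def importance(saliency, gradient, alpha=0.5):
    return alpha * saliency + (1.0 - alpha) * gradient

def min_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):                 # cumulative minimum-energy table
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    seam = [int(np.argmin(cost[-1]))]     # backtrack from the bottom row
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]                     # seam column index for each row

def remove_seam(img, seam):
    return np.array([np.delete(row, col) for row, col in zip(img, seam)])

sal, grad = np.random.rand(6, 8), np.random.rand(6, 8)
img = np.random.rand(6, 8)
print(remove_seam(img, min_seam(importance(sal, grad))).shape)  # (6, 7)
```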

Keywords: energy map, visual saliency, gradient map, seam carving

Procedia PDF Downloads 458
1682 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible by sampling and laboratory testing. It therefore has the potential to realize cost savings and to assess soil properties rapidly and continuously. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the more powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions and calculated overburden pressures, was obtained from a large project in the United Arab Emirates and used for the training and validation of the neural network. A comparison was made between the results of the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual measurements in the collected data. The results show that the ANN is a very powerful tool: very good agreement was obtained between the ANN estimates and the actual measured results, in comparison with the other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. In particular, using the friction ratio in the estimation of Φ and the fines content in the estimation of E considerably improves the prediction models.
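
A GRNN is essentially a normalized radial-basis (Nadaraya-Watson) estimator over the training patterns, which the minimal sketch below illustrates. The CPT-derived features, the toy values, and the smoothing factor are placeholders, not the study's calibration.

```python
# Minimal General Regression Neural Network (GRNN): a kernel-weighted
# average of training targets. Features and values are illustrative only.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return np.dot(w, y_train) / np.sum(w)         # normalized weighted mean

# Toy patterns: [cone resistance qc, friction ratio Rf, depth] -> phi (deg)
X = np.array([[5.0, 1.2, 2.0], [12.0, 0.8, 4.0], [20.0, 0.5, 6.0]])
phi = np.array([28.0, 33.0, 38.0])
print(grnn_predict(X, phi, np.array([10.0, 0.9, 3.5])))
```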

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 400
1681 Exact Formulas of the End-To-End Green’s Functions in Non-Hermitian Systems

Authors: Haoshu Li, Shaolong Wan

Abstract:

Recent work has focused on the directional amplification of a signal input at one end of a one-dimensional chain and measured at the other end. The amplification rate is given by the end-to-end Green's functions of the system. In this work, we derive exact formulas for the end-to-end Green's functions of non-Hermitian single-band systems. In the bulk region, the Green's functions are found to deviate from the previously established integral formula by O(e⁻ᵇᴸ). The results confirm the correspondence between signal amplification and the non-Hermitian skin effect.

Keywords: non-Hermitian, Green's function, non-Hermitian skin effect, signal amplification

Procedia PDF Downloads 117
1680 Sediment Transport Monitoring in the Port of Veracruz Expansion Project

Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando

Abstract:

Most coastal infrastructure around the world is designed considering wave height, current velocities and river discharges; however, little effort has been devoted to surveying sediment transport during dredging, or the modification of currents outside ports or marinas during and after construction. This study presents a complete survey during the construction of one of the largest ports on the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction site in order to describe the hourly sediment transport and the current modifications in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low-wave-energy system with a tidal range of up to 0.40 m. The results show a two-gyre current pattern within the bay: the north side of the bay has an anticyclonic gyre, while the southern part shows a cyclonic gyre. Sediment transport trajectories were computed every hour using the anchored ADCP, a numerical model and the weekly data obtained from the towed ADCP within the entire bay. The trajectories were tracked carefully, since the bay is surrounded by coral reef structures which are sensitive to sedimentation rate and water turbidity. The survey shows that the dredging and the rock input used to build the breakwater added sediment locally (< 2500 m²), which local currents dispersed in less than 4 h, whereas the river input in the middle of the bay and the sewage treatment plant may add more than 10 times this amount on a rainy day or during the tourist season. Finally, the coastline mapped seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port in the northern part, owing to the two-subsystem division of the bay.

Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring

Procedia PDF Downloads 204
1679 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate changes alter regional hydrologic conditions and result in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection, and the statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis; GLUE is linked with SWAT in the Calibration and Uncertainty Program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s and 2080s. The time series generated by the HadCM3 GCM and SDSM indicate a significant increasing trend in maximum and minimum temperature and a slight increasing trend in precipitation for both the A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis was made with the downscaled temperature and precipitation time series as input to the SWAT hydrological model for both the A2a and B2a emission scenarios. The model output shows that annual flow volume may increase by up to 35% under both emission scenarios in all three benchmark periods, and all seasons show an increase in flow volume for both scenarios over all time horizons. Potential evapotranspiration in the catchment will also increase, on average by 3-15% annually for the 2020s and by 7-25% for the 2050s and 2080s, under both A2a and B2a emission scenarios.

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 332
1678 Improving the Growth Performance of Beetal Goat Kids Weaned at Various Stages with Various Levels of Dietary Protein in Starter Ration under High Input Feeding System

Authors: Ishaq Kashif, Muhammad Younas, Muhammad Riaz, Mubarak Ali

Abstract:

Poor feeding management during the pre-weaning period is one of the factors behind the compromised growth of Beetal kids fattened for meat. The main reasons for this are the small amount of milk offered to kids and non-serious management efforts. This study was planned to find the starter protein level best suited to the age of weaning when shifting animals to a high input feeding system. A total of 42 Beetal male kids aged 30 (±10), 60 (±10) and 90 (±10) days were selected, with 16 in each age group, designated G30, G60 and G90, respectively. Their weights were 8±2 kg (G30), 12±2 kg (G60) and 16±2 kg (G90). All animals were weaned by introducing the total mixed feed gradually and withdrawing milk during a two-week adjustment period. Pelleted starter rations (total mixed feed) with three dietary protein levels, designated R1 (16% CP), R2 (20% CP) and R3 (26% CP), were introduced; a control group was reared on fodder (maize). The starter rations were iso-caloric and were offered for six weeks. All animals were exposed to treatment using a two-factor factorial (3×3) plus control treatment arrangement under a completely randomized design. Data were collected on average daily feed intake (ADFI), average daily gain (ADG), gain-to-intake ratio, Kleiber ratio (KR), body measurements and blood metabolites of the kids, and were analyzed using the aov function of the R software, as sketched below. The statistical analysis showed that starter protein level and age of weaning had a significant interaction for ADG (P < 0.001), KR (P < 0.001), ADFI (P < 0.05) and blood urea nitrogen (P < 0.05), while serum creatinine and feed conversion had non-significant interactions. The trend analysis revealed a significant quadratic interaction (P < 0.05) of ADG between protein level and age of weaning. Animals weaned at 30 or 60 days had better ADG on the R2 diet (46.8 g/day and 87.06 g/day, respectively), while animals weaned at 90 days had the best ADG (127 g/day) with R1. It is concluded that animals weaned at 30 or 60 days require 20% CP for better growth performance, while animals weaned at 90 days perform better with 16% CP.
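
For illustration, the two-factor analysis described (weaning age × starter protein level, testing main effects and their interaction) can be sketched as below with synthetic data; the layout and values are stand-ins for the study's dataset, and Python's statsmodels plays the role of R's aov.

```python
# Sketch of a two-factor factorial ANOVA (age x ration) on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "age": np.repeat(["G30", "G60", "G90"], 12),
    "ration": np.tile(np.repeat(["R1", "R2", "R3"], 4), 3),
})
df["adg"] = 60 + rng.normal(0, 10, len(df))   # average daily gain, g/day (toy)

model = ols("adg ~ C(age) * C(ration)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))        # main effects + interaction
```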

Keywords: average daily gain, starter protein levels, weaning age, gain to intake ratio

Procedia PDF Downloads 224
1677 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signals were recorded from 51 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, from which maximum power spectral density (PSD) values are extracted as features. There are three indices for the balanced brain: index 3, index 4 and index 5, and the EEG signals differ significantly across these brain balancing index (BBI) classes. The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classification accuracy is 88.46%. These results show that k-NN can be used to predict brain balance.
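
The classification stage reduces to a small supervised problem: band-wise maximum-PSD features labeled with BBI classes, fed to k-NN. The sketch below uses synthetic features as stand-ins for the spectrogram-derived values described in the paper.

```python
# k-NN over maximum-PSD features from the alpha and beta bands, labelled
# with brain balancing index 3-5. Feature values are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(51, 2))            # [max PSD alpha, max PSD beta] per subject
y = rng.integers(3, 6, size=51)         # BBI classes: index 3, 4, 5

knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5).mean())   # cross-validated accuracy
```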

Keywords: power spectral density, 3D EEG model, brain balancing, kNN

Procedia PDF Downloads 460
1676 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs), which have many hidden layers and are trained using new methods, have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. In such low-resource scenarios, building an ASR model is a complex task, because the lack of labeled data results in an under-trained system. Semi-supervised learning approaches are therefore a necessity, given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio recordings and their transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five; a refinement of the weight matrices and bias terms was also performed with Stochastic Gradient Descent (SGD) training, using the cross-entropy criterion as the objective function. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by the Word Error Rate (WER), and the test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out to select the best ASR results, and a comparison was made, under the same conditions, between a GMM-based model without retraining and the proposed DNN system. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model in terms of WER in all tested cases; the best result was a 6% relative WER improvement. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
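
Steps (a)-(c) amount to a confidence-filtered self-training loop. The sketch below shows that loop in miniature on synthetic data: a small classifier stands in for the DNN acoustic model, and the predicted class probability stands in for the lattice-based confidence scores.

```python
# Generic confidence-based self-training loop mirroring steps (a)-(c).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:100], y[:100], X[100:]

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_lab, y_lab)                          # (a) train the seed model

for _ in range(3):
    proba = model.predict_proba(X_unlab)         # (b) decode unlabelled data
    conf = proba.max(axis=1)
    keep = conf > 0.95                           # (c) keep confident outputs
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    model.fit(X_aug, y_aug)                      # retrain on augmented set

print("labelled:", len(y_lab), "pseudo-labelled added:", int(keep.sum()))
```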

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 320
1675 Plasma-Assisted Decomposition of Cyclohexane in a Dielectric Barrier Discharge Reactor

Authors: Usman Dahiru, Faisal Saleem, Kui Zhang, Adam Harvey

Abstract:

Volatile organic compounds (VOCs) are atmospheric contaminants predominantly derived from petroleum spills, solvent usage, agricultural processes, and the automobile and chemical processing industries, and they can be detrimental to the environment and human health. Environmental problems such as the formation of photochemical smog, organic aerosols, and global warming are associated with VOC emissions, and research has shown a clear relationship between VOC emissions and cancer. In recent years, stricter emission regulations have been put in place around the world, especially in industrialized countries, to restrict VOC emissions. Non-thermal plasmas (NTPs) are a promising technology for reducing VOC emissions by converting them into less toxic, more environmentally friendly species. The dielectric barrier discharge (DBD) plasma is of interest due to its flexibility, moderate capital cost, and ease of operation under ambient conditions. In this study, a DBD reactor has been developed for the decomposition of cyclohexane (as a VOC model compound) using nitrogen, dry air, and humidified air carrier gases. The effects of specific input energy (1.2-3.0 kJ/L), residence time (1.2-2.3 s) and concentration (220-520 ppm) were investigated. The removal efficiency of cyclohexane increased with increasing plasma power and residence time, and decreased with increasing inlet concentration at fixed plasma power and residence time. The decomposition products included H₂, CO₂, H₂O, lower hydrocarbons (C₁-C₅) and a solid residue. The highest removal efficiency (98.2%) was observed at a specific input energy of 3.0 kJ/L and a residence time of 2.3 s in humidified air plasma. The effect of humidity was investigated to determine whether it could reduce the formation of solid residue in the DBD reactor, and the solid residue indeed completely disappeared in humidified air plasma. Furthermore, the OH radicals introduced by humidification not only increased the removal efficiency of cyclohexane but also improved product selectivity. This work demonstrates that cyclohexane can be converted into smaller molecules in a DBD non-thermal plasma reactor by varying the plasma power (SIE), residence time, reactor configuration, and carrier gas.

Keywords: cyclohexane, dielectric barrier discharge reactor, non-thermal plasma, removal efficiency

Procedia PDF Downloads 109
1674 Connected Objects with Optical Rectenna for Wireless Information Systems

Authors: Chayma Bahar, Chokri Baccouch, Hedi Sakli, Nizar Sakli

Abstract:

The harvesting and transport of optical and radiofrequency signals is a topical subject with multiple challenges. In this paper, we present an optical rectenna system: a hybrid solar cell antenna for 5G mobile communication networks, together with a rectifying circuit. A parametric study is carried out to follow the influence of load resistance and input power on the performance of the optical rectenna system. The proposed solar cell antenna structure operates in the 2.45 GHz band of the future 5G standard.

Keywords: antenna, IoT, optical rectenna, solar cell

Procedia PDF Downloads 152
1673 Geographic Information System-Based Map for Best Suitable Place for Cultivating Permanent Trees in South-Lebanon

Authors: Allaw Kamel, Al-Chami Leila

Abstract:

It is important to reduce the human influence on natural resources by identifying appropriate land uses, and it is essential to carry out a scientific land evaluation. Such analysis identifies the main factors of agricultural production and enables decision makers to develop crop management that increases land capability. The key is to match the type and intensity of land use with its natural capability: in order to benefit from these areas and invest in them to obtain good agricultural production, they must be fully organized and managed. Lebanon suffers from unorganized agricultural land use. We take south Lebanon as the study area, since it has the most fertile ground and a variety of crops. The study aims to identify and locate the most suitable areas to cultivate thirteen types of permanent trees: apples, avocados, stone fruits in coastal regions, stone fruits in mountain regions, bananas, citrus, loquats, figs, pistachios, mangoes, olives, pomegranates, and grapes. Several geographical factors are taken as criteria for selecting the best location to cultivate: soil, rainfall, pH, temperature, and elevation are the main inputs used to create the final map. The input data for each factor are managed, visualized and analyzed using a Geographic Information System (GIS), and GIS management tools are implemented to produce input maps that identify the suitable areas for each index. Combining the different index maps generates the final output map of the places best suited for permanent tree productivity, which is reclassified into three suitability classes: low, moderate, and high. The results show different locations suitable for different kinds of trees, and reflect the importance of GIS in helping decision makers find the most suitable location for each tree, increasing productivity and crop variety.
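
The overlay step can be sketched as a weighted sum of per-factor suitability rasters followed by reclassification into the three suitability classes; the weights and class thresholds below are illustrative, not the study's calibrated values.

```python
# Weighted overlay of per-factor suitability rasters on a common grid,
# reclassified into low / moderate / high suitability.
import numpy as np

rng = np.random.default_rng(2)
soil, rain, ph, temp, elev = (rng.random((4, 5)) for _ in range(5))
weights = {"soil": 0.3, "rain": 0.25, "ph": 0.15, "temp": 0.15, "elev": 0.15}

suit = (weights["soil"] * soil + weights["rain"] * rain +
        weights["ph"] * ph + weights["temp"] * temp + weights["elev"] * elev)

classes = np.digitize(suit, [0.33, 0.66])   # 0 = low, 1 = moderate, 2 = high
print(classes)
```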

Keywords: agricultural production, crop management, geographical factors, Geographic Information System, GIS, land capability, permanent trees, suitable location

Procedia PDF Downloads 119
1672 Design and Implementation of Grid-Connected Photovoltaic Inverter

Authors: B. H. Lee

Abstract:

Nowadays, grid-connected photovoltaic (PV) inverters are adopted in various places, such as homes and factories, because a grid-connected PV inverter can reduce total power consumption by supplying electricity from a PV array. In this paper, the design and implementation of a 300 W grid-connected PV inverter are described. It is implemented with a TI Piccolo DSP core and operated at a 100 kHz switching frequency in order to reduce harmonic content. The maximum operating input voltage is up to 45 V. The characteristics of the designed system, including maximum power point tracking (MPPT), single operation and battery charging, are verified by simulation and experimental results.
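
Although the paper does not detail its MPPT algorithm, a common choice in such inverters is perturb-and-observe, sketched below against a toy P-V curve; the curve and step size are invented for illustration.

```python
# Perturb-and-observe MPPT sketch: step the operating voltage and reverse
# direction whenever the measured power drops.
def pv_power(v):
    """Toy P-V curve with a maximum power point near 31 V."""
    return max(0.0, 8.0 * v * (1.0 - (v / 45.0) ** 5))

def perturb_and_observe(v=20.0, dv=0.5, steps=60):
    p_prev = pv_power(v)
    for _ in range(steps):
        v += dv
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse perturbation direction
            dv = -dv
        p_prev = p
    return v

print("operating point near MPP: %.1f V" % perturb_and_observe())
```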

Keywords: design, grid-connected, implementation, photovoltaic

Procedia PDF Downloads 399
1671 Status of the European Atlas of Natural Radiation

Authors: G. Cinelli, T. Tollefsen, P. Bossew, V. Gruber, R. Braga, M. A. Hernández-Ceballos, M. De Cort

Abstract:

In 2006, the Joint Research Centre (JRC) of the European Commission started the 'European Atlas of Natural Radiation' project. The Atlas aims to prepare a collection of maps of Europe displaying the levels of natural radioactivity caused by different sources (indoor and outdoor radon, cosmic radiation, terrestrial radionuclides, terrestrial gamma radiation, etc.). The overall goal of the project is to estimate, in geographical resolution, the annual dose that the public may receive from natural radioactivity, combining the information from the different radiation components. The first map developed is the European map of indoor radon (Rn), since in most cases Rn makes the most important contribution to exposure. New versions of the map are released when new countries join the project or when participating countries send new data; we show the latest status of this map, which currently includes 25 European countries. Second, the JRC has undertaken to map a variable which measures 'what the earth delivers' in terms of Rn. The corresponding quantity is called the geogenic radon potential (RP). Due to the heterogeneity of data sources across Europe, there is a need to develop a harmonized quantity which, on the one hand, adequately measures or classifies the RP and, on the other hand, is suited to accommodate the variety of input data used to estimate this target quantity. Candidate input quantities which may serve as predictors of the RP, and for which data are available across Europe to different extents, are uranium (U) concentration in rocks and soils, soil-gas radon and soil permeability, terrestrial gamma dose rate, geological information and indoor data from ground floors. The European Geogenic Radon Map makes it possible to characterize areas for radon hazard, on a European geographical scale, where indoor radon measurements are not available. In parallel with the ongoing work on the European Indoor Radon, Geogenic Radon and Cosmic Radiation Maps, we have made progress in developing maps of terrestrial gamma radiation and of U, Th and K concentrations in soil and bedrock. We show the first, preliminary map of the terrestrial gamma dose rate, estimated using the ambient dose equivalent rate data available from the EURDEP system (about 5000 fixed monitoring stations across Europe), together with the first maps of U, Th, and K concentrations in soil and bedrock.

Keywords: Europe, natural radiation, mapping, indoor radon

Procedia PDF Downloads 272
1670 Reduced Complexity of ML Detection Combined with DFE

Authors: Jae-Hyun Ro, Yong-Jun Kim, Chang-Bin Ha, Hyoung-Kyu Song

Abstract:

In multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but very high complexity. This paper therefore proposes a reduced-complexity ML detection combined with a decision feedback equalizer (DFE). The error performance of the proposed detection scheme is better than that of the conventional DFE, while its complexity is lower than that of conventional ML detection.
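
For context, the sketch below shows the baseline exhaustive ML search whose complexity the proposed scheme reduces (the paper prunes this search using DFE decisions, which is not reproduced here): for a 2×2 MIMO channel with QPSK, ML detection minimizes ||y − Hx||² over all candidate symbol vectors.

```python
# Baseline exhaustive ML detection for a 2x2 MIMO system with QPSK.
import itertools
import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(3)

H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=2)]                 # transmitted symbols
y = H @ x + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# ML: minimize ||y - Hx||^2 over all 4^2 candidate symbol vectors
best = min(itertools.product(qpsk, repeat=2),
           key=lambda c: np.linalg.norm(y - H @ np.array(c)) ** 2)
print("sent:    ", x)
print("detected:", np.array(best))
```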

Keywords: detection, DFE, MIMO-OFDM, ML

Procedia PDF Downloads 583
1669 Community Engagement Strategies to Assist with the Development of an RCT Among People Living with HIV

Authors: Joyce K. Anastasi, Bernadette Capili

Abstract:

Our research team focuses on developing and testing protocols to manage chronic symptoms. For many years, our team has designed and implemented symptom management studies for people living with HIV (PLWH), identifying symptoms that are not curable and are not adequately controlled by conventional therapies. As an exemplar, we describe how we successfully engaged PLWH in developing and refining our research feasibility protocol for distal sensory peripheral neuropathy (DSP) associated with HIV. With input from PLWH with DSP, our research received National Institutes of Health (NIH) research funding support. Significance: DSP is one of the most common neurologic complications in HIV, estimated to affect 21% to 50% of PLWH. The pathogenesis of DSP in HIV is complex and unclear; proposed mechanisms include cytokine dysregulation, viral protein-produced neurotoxicity, and mitochondrial dysfunction associated with antiretroviral medications. There are no FDA-approved treatments for DSP in HIV. Purpose: Aims: 1) to explore the impact of DSP on the lives of PLWH, 2) to identify patients' perspectives on successful treatments for DSP, 3) to identify interventions considered feasible and sensitive to the needs of PLWH with DSP, and 4) to obtain participant input for the protocol/study design. Description of Process: We conducted a needs assessment with PLWH with DSP, from which we learned, from the patients' perspective, detailed descriptions of their symptoms, their physical functioning with DSP, the self-care remedies they had tried, and their desired interventions. We also asked about protocol scheduling, instrument clarity, study compensation, study-related burdens, and willingness to participate in a randomized controlled trial (RCT) with a placebo and a waitlist group. Implications: We incorporated many of the suggestions learned from the needs assessment, and we developed and completed a feasibility study that provided invaluable information for subsequent NIH-funded studies. In addition to our extensive clinical and research experience working with PLWH, learning from the patient perspective helped in developing our protocol and promoting a successful plan for the recruitment and retention of study participants.

Keywords: clinical trial development, peripheral neuropathy, traditional medicine, HIV, AIDS

Procedia PDF Downloads 61
1668 Using AI Based Software as an Assessment Aid for University Engineering Assignments

Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth

Abstract:

As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually been on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and cope with ever-expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s, and educators quickly found that the process of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process. It may be argued that the impersonal nature of VLEs and their limited functionality were the leading factors contributing to this reported dissatisfaction. To this day, when faced with the prospect of assigning colossal engineering cohorts their homework and assessments, educators frequently choose optimally curated assessment formats, such as multiple-choice quizzes and numerical answer input boxes, so that automated grading software embedded in the VLEs can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining confidence in using, solving and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this becomes challenging if the majority of assignment formats students engage with are multiple-choice questions, leaving educators with a reduced perspective of their students' ability to manipulate equations. Artificial intelligence (AI) has recently been shown to be an important consideration for many technologies. In our paper, we explore the use of new AI-based software designed to work in conjunction with current VLEs. Drawing on our experience with the software, we discuss its potential to solve a selection of problems ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software's potential to increase learning efficiency through features which claim to allow more customized and higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms, and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks of the software, including the extent to which optical character recognition (OCR) could perpetuate errors and create disagreements between student intent and submitted assignment answers. It is the intention of the authors that this study be useful as an example of the implementation of AI in a practical assessment scenario and serve as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments.

Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)

Procedia PDF Downloads 103
1667 Implementation of 4-Bit Direct Charge Transfer Switched Capacitor DAC with Mismatch Shaping Technique

Authors: Anuja Askhedkar, G. H. Agrawal, Madhu Gudgunti

Abstract:

The Direct Charge Transfer Switched Capacitor (DCT-SC) DAC is the internal DAC used in Delta-Sigma (∆∑) DACs, which work on the over-sampling concept. Switched-capacitor DACs mainly suffer from mismatch among capacitors, which causes nonlinearity between output and input. Dynamic Element Matching (DEM) techniques are used to match the capacitors, and they come in many types according to their element selection logic. In this paper, the Data Weighted Averaging (DWA) technique is used for mismatch shaping. A 4-bit DCT-SC DAC with the DWA DEM technique is implemented in WINSPICE simulation software in 180 nm CMOS technology. The DNL of the DAC with DWA is ±0.03 LSB and the INL is ±0.02 LSB.
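
The DWA element-selection rule itself is simple: unit elements are used cyclically, starting from a pointer that advances by the input code each sample, so every element accumulates mismatch error equally often and the error is first-order shaped. A minimal sketch, assuming a 15-element (thermometer-coded) 4-bit DAC:

```python
# Data Weighted Averaging (DWA) element selection: rotate through the unit
# capacitors starting from a pointer that advances by each input code.
def dwa_select(codes, n_elements=15):
    """For each input code, yield the indices of the unit elements used."""
    ptr = 0
    for code in codes:
        sel = [(ptr + i) % n_elements for i in range(code)]
        ptr = (ptr + code) % n_elements
        yield sel

# 4-bit DAC input codes -> selected unit-element indices
for code, sel in zip([5, 7, 3], dwa_select([5, 7, 3])):
    print(code, "->", sel)
# 5 -> [0..4], 7 -> [5..11], 3 -> [12, 13, 14]: all 15 elements used once
```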

Keywords: ∑-Δ DAC, DCT-SC-DAC, mismatch shaping, DWA, DEM

Procedia PDF Downloads 328
1666 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study

Authors: Cecile Laval, Harriet Lowe

Abstract:

Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants' default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to the instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (the French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (the French past tense imperfective aspect, the French subjunctive used for the expression of doubt, and the French causative construction with faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of all three linguistic features. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature after treatment; 62.5% of participants improved on the secondary target item (the French subjunctive used for the expression of doubt) and 37.5% improved on the cumulative target feature (the French causative construction with faire). Statistically, there was no significant difference between the pre-test and post-test scores on the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension, and the visual data demonstrate changes in participants' processing strategies: gaze plots from pre- and post-tests display participants' fixation points shifting from content words to the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both the primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary and transfer-of-training effects of Processing Instruction. It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.

Keywords: eye-tracking, language teaching, processing instruction, second language acquisition

Procedia PDF Downloads 260
1665 Organic Geochemistry and Oil-Source Correlation of Cretaceous Sediments in the Kohat Basin, Pakistan

Authors: Syed Mamoon Siyar, Fayaz Ali, Sajjad Ahmad, Samina Jahandad, George Kontakiotis, Hammad T. Janjuhah, Assimina Antonarakou, Waqas Naseem

Abstract:

The Cretaceous Chichali Formation in the Chanda-01, Chanda-02, Chanda-03 and Mela-05 wells, and the oil samples from the Chanda-01 and Chanda-02 wells, located in the Kohat Basin, Pakistan, were analyzed with the objectives of evaluating the hydrocarbon generation potential, the source, thermal maturity and depositional environment of the organic matter, and the oil-source correlation, by employing geochemical screening techniques and biomarker studies. The total organic carbon (TOC) values in Chanda-02, Chanda-03 and Mela-05 indicate, in general, poor to fair, fair, and fair to good source rock potential with low genetic potential, respectively. The nature of the organic matter was determined from standard cross-plots of Rock-Eval pyrolysis parameters, indicating that the studied cuttings from the Chichali Formation dominantly contain type III kerogen at present and show maturity for oil generation in the studied wells. The organic petrographic study also confirmed vitrinite (type III) as the major maceral in the investigated Chichali Shales, and its reflectance values show maturity for oil. The ratios of non-biomarker and biomarker parameters, i.e., steranes, terpanes and aromatic parameters, indicate a marine source of organic matter deposited in an anoxic environment for the Chichali Formation in the Chanda-01 and Chanda-02 wells, and a mixed source input of organic matter deposited under suboxic conditions for the oils from the same wells. The CPI and various biomarker parameters, such as C29 S/(S+R), ββ/(αα+ββ), M29/H30, Ts/(Ts+Tm) and H31 S/(S+R), the aromatic methyl phenanthrene index (MPI), and the organic petrographic analysis (vitrinite reflectance), suggest a mature stage of oil generation for the Chichali Shales and the oil samples in the study area, with slightly higher thermal maturity in the case of the oils. Based on the source and thermal maturity biomarker and non-biomarker parameters, the produced oils do not correlate with the Cretaceous Chichali Formation in the studied Chanda-01 and Chanda-02 wells in the Kohat Basin, Pakistan; rather, it is suggested that these oils were generated by strata containing a higher terrestrial organic input compared to the Chichali Shales.

Keywords: organic geochemistry, Chichali Shales and crude oils, Kohat Basin, Pakistan

Procedia PDF Downloads 56
1664 PSS and SVC Controller Design by BFA to Enhance the Power System Stability

Authors: Saeid Jalilzadeh

Abstract:

The design of PSS and SVC controllers based on the Bacterial Foraging Algorithm (BFA) to improve the stability of power systems is proposed in this paper. The same controller structure is considered for both the PSS and the SVC, and a single machine infinite bus (SMIB) system with the SVC located at the generator terminal is used to evaluate the proposed controllers. BFA is used to optimize the coefficients of the controllers. Finally, the dynamic behavior of the generator is investigated by simulating a particular disturbance in the generator input power with the proposed controllers in place. The simulation results demonstrate that the system with the optimized controllers performs outstandingly in rapidly damping power system oscillations.
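
As a rough sketch of the optimization engine, the code below implements only the chemotaxis (tumble-and-swim) step of bacterial foraging on a toy two-gain cost surface; the reproduction and elimination-dispersal steps of the full BFA, and the actual controller cost function, are omitted.

```python
# Simplified bacterial-foraging chemotaxis: each bacterium tumbles in a
# random direction and keeps swimming while the cost improves.
import numpy as np

def bfa_minimize(cost, dim, n_bacteria=20, chem_steps=50, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, size=(n_bacteria, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(chem_steps):
        for i in range(n_bacteria):
            d = rng.normal(size=dim)
            d /= np.linalg.norm(d)            # random tumble direction
            trial = pop[i] + step * d
            f = cost(trial)
            while f < fit[i]:                 # swim while improving
                pop[i], fit[i] = trial, f
                trial = pop[i] + step * d
                f = cost(trial)
    return pop[np.argmin(fit)], fit.min()

# Toy use: tune two controller gains against a quadratic cost surface
best, val = bfa_minimize(lambda k: (k[0] - 0.8) ** 2 + (k[1] + 0.3) ** 2, dim=2)
print(best, val)
```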

Keywords: PSS, SVC, SMIB, optimized controller

Procedia PDF Downloads 432
1663 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually taken when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Formulated as a multiple-conditions diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures; it will also contribute to the better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 49
1662 Cultivating a Successful Academic Career in Higher Education Institutes: The 10 X C Model

Authors: S. Zamir

Abstract:

The modern era has brought with it significant organizational changes. These changes have not bypassed the academic world, and along with the old academic bonds that include a world of knowledge and ethics, academic faculty members are required more than ever not only to survive in the academic world but also to thrive and flourish and position themselves as modern and opinionated academicians. Based upon the writings of organizational consultants, the article suggests a 10 X C model for cultivating an academic backbone, as well as emphasizing its input to the professional growth of university and college academics: Competence, Calculation of pain and gain, Character, Commitment, Communication, Curiosity, Coping, Courage, Collaboration and Celebration.

Keywords: academic career, academicians, higher education, the 10xC model

Procedia PDF Downloads 232
1661 Model Predictive Controller for Pasteurization Process

Authors: Tesfaye Alamirew Dessie

Abstract:

Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. The dynamics of the pasteurization process were obtained by system identification from experimental data. The quality of several model structures was evaluated using best fit with data validation, residual analysis, and stability analysis. The auto-regressive with exogenous input (ARX322) model of the pasteurization process fits the validation data by roughly 80.37 percent. The ARX322 model structure was then used to design the MPC and PID control techniques. Comparing controller performance based on settling time, overshoot percentage, and stability analysis shows that the MPC controller outperforms the PID for those parameters.
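
The ARX identification step reduces to a linear least-squares regression over past outputs and delayed inputs. The sketch below fits an ARX(na=3, nb=2, nk=2) structure, matching the "ARX322" naming, to synthetic data; the true system and noise level are invented for illustration.

```python
# ARX least-squares identification: regress y[t] on past outputs and
# delayed inputs, then solve for the coefficient vector.
import numpy as np

def fit_arx(u, y, na=3, nb=2, nk=2):
    start = max(na, nb + nk - 1)
    rows, targets = [], []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - nk - i] for i in range(nb)]
        rows.append(past_y + past_u)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta                 # [a1..a_na, b1..b_nb]

rng = np.random.default_rng(4)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(3, 500):          # "true" system to identify
    y[t] = 1.2*y[t-1] - 0.5*y[t-2] + 0.1*y[t-3] + 0.8*u[t-2] + 0.3*u[t-3]
y += 0.01 * rng.normal(size=500)
print(fit_arx(u, y))             # recovers ~[-1.2, 0.5, -0.1, 0.8, 0.3]
```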

Keywords: MPC, PID, ARX, pasteurization

Procedia PDF Downloads 131
1660 Identification of Nonlinear Systems Structured by Hammerstein-Wiener Model

Authors: A. Brouri, F. Giri, A. Mkhida, A. Elkarkri, M. L. Chhibat

Abstract:

Standard Hammerstein-Wiener models consist of a linear subsystem sandwiched between two memoryless nonlinearities. Here, the linear subsystem is allowed to be parametric or not, continuous- or discrete-time; the input and output nonlinearities are polynomial and may be noninvertible. A two-stage identification method is developed such that the parameters of all nonlinear elements are estimated first, using the Kozen-Landau polynomial decomposition algorithm. The obtained estimates are then used in the identification of the linear subsystem, making use of suitable pre- and post-compensators.
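
The structure itself is easy to state in code: a static polynomial input nonlinearity, a linear dynamic block, and a static polynomial output nonlinearity in series. The sketch below simulates a discrete-time instance with illustrative coefficients; the two-stage estimation procedure itself is not reproduced.

```python
# Simulate a Hammerstein-Wiener system: f(u) -> linear filter -> g(w).
import numpy as np
from scipy.signal import lfilter

def hammerstein_wiener(u, f_coefs, b, a, g_coefs):
    v = np.polyval(f_coefs, u)      # static input nonlinearity f(u)
    w = lfilter(b, a, v)            # linear subsystem w = (B/A) v
    return np.polyval(g_coefs, w)   # static output nonlinearity g(w)

u = np.sin(np.linspace(0, 10, 200))
y = hammerstein_wiener(u, f_coefs=[0.5, 1.0, 0.0],   # f(u) = 0.5u^2 + u
                       b=[0.2, 0.1], a=[1.0, -0.7],  # first-order IIR block
                       g_coefs=[1.0, 0.3])           # g(w) = w + 0.3
print(y[:5])
```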

Keywords: nonlinear system identification, Hammerstein-Wiener systems, frequency identification, polynomial decomposition

Procedia PDF Downloads 484
1659 Optimum Design of Helical Gear System on Basis of Maximum Power Transmission Capability

Authors: Yasaman Esfandiari

Abstract:

Mechanical engineering has always dealt with the transmission of input power in power trains. One of the ways to achieve this goal is to use gears to change the magnitude and direction of the torque and the speed, but the gears must be optimally designed to best achieve these objectives. In this study, helical gear systems are optimized for maximum power transmission capability. Material selection, space restrictions, available manufacturing facilities, and the probabilities of tooth breakage and tooth wear are taken into account, and the governing equations are derived. Finally, a Matlab code was written to solve the optimization problem, and the results are verified.

Keywords: design, gears, Matlab, optimization

Procedia PDF Downloads 223
1658 On the Application of Heuristics of the Traveling Salesman Problem for the Task of Restoring the DNA Matrix

Authors: Boris Melnikov, Dmitrii Chaikovskii, Elena Melnikova

Abstract:

The traveling salesman problem (TSP) is a well-known optimization problem that seeks the shortest possible route that visits a set of points and returns to the starting point. In this paper, we apply some heuristics for the TSP to the task of restoring the DNA matrix. This restoration problem is often considered in biocybernetics: we must recover the matrix of distances between DNA sequences when not all of the elements of the matrix are known at the input. We consider the possibility of using this method in the testing of algorithms that calculate the distance between a pair of DNA sequences, in order to restore the partially filled matrix.
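
As one concrete example of such a heuristic, the sketch below runs a greedy nearest-neighbour tour over a partially known distance matrix, treating unknown entries as infinite so that the tour prefers known edges; the matrix is a toy example, and the paper's actual heuristics may differ.

```python
# Greedy nearest-neighbour TSP heuristic on a partially known distance
# matrix; missing entries are treated as infinite.
import numpy as np

INF = np.inf
D = np.array([[0, 3, INF, 7],
              [3, 0, 4, INF],
              [INF, 4, 0, 2],
              [7, INF, 2, 0]], dtype=float)   # partially known distances

def nearest_neighbour_tour(D, start=0):
    n = len(D)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: D[last, j])   # cheapest known next hop
        tour.append(nxt)
        visited.add(nxt)
    return tour

print(nearest_neighbour_tour(D))   # e.g. [0, 1, 2, 3]
```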

Keywords: optimization problems, DNA matrix, partially filled matrix, traveling salesman problem, heuristic algorithms

Procedia PDF Downloads 126
1657 Analysis of DNA from Fired Cartridge Casings

Authors: S. Mawlood, L. Denanny, N. Watson, B. Pickard

Abstract:

DNA analysis has been widely accepted as providing valuable evidence concerning the identity of the source of biological traces. Our work has shown that DNA samples can survive on cartridges even after firing. The study also raised the possibility of determining other information, such as the age of the donor; such information may be invaluable in cases where spent cartridges from automatic weapons are left behind at the scene of a crime. In spite of the nature of touch evidence and the exposure to high chamber temperatures during shooting, we were still able to retrieve enough DNA for profile typing. In order to estimate the age of the contributor, DNA methylation levels in the retrieved DNA were analyzed using the EpiTect system; however, the results were not conclusive, due to the low amount of input DNA.

Keywords: DNA profile, DNA Methylation, fired cartridge, touch sample

Procedia PDF Downloads 417
1656 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of a DNN by modifying its inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories, and an adversarial attack slightly alters the image to move it over a decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also a third type, the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only with clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we use FGSM training as a defense method, the classification accuracy greatly improves, from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method, which improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks; PGD attacks and defenses are overall significantly more effective than FGSM methods.
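
The FGSM step described above is a one-line perturbation: add epsilon times the sign of the input gradient of the loss. A minimal PyTorch sketch, with a random linear model and input standing in for the MNIST classifier used in the experiments (PGD would repeat a smaller step several times with projection):

```python
# Minimal FGSM attack: perturb the input by epsilon * sign(grad of loss).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in for an MNIST digit
y = torch.tensor([3])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()                                    # gradient w.r.t. the input

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()  # FGSM step
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```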

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 163