Search results for: optimization algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4713

813 Solar PV System for Automatic Guideway Transit (AGT) System in BPSU Main Campus

Authors: Nelson S. Andres, Robert O. Aguilar, Mar O. Tapia, Meeko C. Masangcap, John Denver Catapang, Greg C. Mallari

Abstract:

This study explores the possibility of using solar PV as an alternative to the power grid for electrifying the AGT system installed in the BPSU Main Campus. The output of this study gives BPSU the option to invest in a solar PV system and pro-actively respond to the UN Sustainable Development Goal of reliable, sustainable and modern energy sources, reducing energy pollution and climate change impact in the long run. Thus, this study covers the technical as well as the financial studies, which BPSU can also use to source funding from different government agencies. For this study, the electrical design and requirements of the on-going DOST AGT system project are carefully considered. In the proposed design, the AGT station is fitted with a rechargeable battery system to which the energy harnessed by the solar PV panels installed on the rooftop of the station/NCEA building is directed. The solar energy is supplied directly to the electric double-layer capacitor (EDLC) batteries and from there transmitted to other equipment in need. When the AGT is not in use, the harnessed energy may be used by the NCEA building, thus lessening the building's energy consumption from the grid. The use of a solar PV system with EDLC is compared with the use of the electric grid for the purpose of electrifying the AGT or the NCEA building (when the AGT is not in use). This is to determine how much solar energy is accumulated by the solar PV to accommodate the needs of the coaches' motors, lighting, air-conditioning units, door sensors, panel display, etc. The proposed solar PV design, as well as the data regarding the charging and discharging of batteries and the power consumption of all AGT components, are simulated for optimization, analysis and validation through the use of PVSyst software.
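
As a rough illustration of the kind of energy balance a PVSyst study formalizes, the sketch below compares one day's PV harvest against a station load. Every figure in it is an assumed placeholder for illustration, not a BPSU design value.

```python
# Back-of-envelope daily energy balance (all numbers are assumptions).
PANEL_AREA_M2 = 200.0        # hypothetical rooftop array area
EFFICIENCY = 0.18            # assumed module efficiency
PEAK_SUN_HOURS = 4.5         # assumed daily insolation, kWh/m^2
LOAD_KWH_PER_DAY = 120.0     # assumed AGT load: motors, lighting, A/C, sensors

harvest = PANEL_AREA_M2 * EFFICIENCY * PEAK_SUN_HOURS  # kWh/day into EDLC bank
surplus = harvest - LOAD_KWH_PER_DAY                   # available to NCEA building
print(f"harvest {harvest:.0f} kWh/day, surplus {surplus:.0f} kWh/day")
```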

Keywords: AGT, Solar PV, railway, EDLC

Procedia PDF Downloads 63
812 Evaluation of Forming Properties on AA 5052 Aluminium Alloy by Incremental Forming

Authors: A. Anbu Raj, V. Mugendiren

Abstract:

Sheet metal forming is a vital manufacturing process used in the automobile, aerospace, and agricultural industries, among others. Incremental forming is a promising process providing a short and inexpensive way of forming complex three-dimensional parts without using a die. The aim of this research is to study the forming behaviour of the aluminium alloy AA 5052 under incremental forming and to study the forming limit diagram (FLD) of cone shapes in AA 5052 at room temperature and various annealing temperatures. Initially, the surface roughness and wall thickness obtained through incremental forming of AA 5052 sheet at room temperature are optimized by controlling the effects of the forming parameters. A central composite design (CCD) was utilized to plan the experiment. The step depth, feed rate, and spindle speed were considered as input parameters in this study, with surface roughness and wall thickness as the output responses. Process performance, namely average thickness and surface roughness, was evaluated. The optimized results target minimum surface roughness and maximum wall thickness, and the optimal settings are determined based on response surface methodology and analysis of variance. The forming limit diagram is constructed for AA 5052 at room temperature and various annealing temperatures using the optimized process parameters from the response surface methodology. The cone has higher formability than the square pyramid, as well as a higher wall thickness distribution. Finally, the FLDs of the cone and square pyramid shapes at room temperature and the various annealing temperatures are compared experimentally and simulated with Abaqus software.
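
For readers unfamiliar with the CCD/RSM workflow, the following sketch fits a second-order response surface over a face-centred CCD in coded units. The factor names mirror the study's step depth, feed rate and spindle speed, but the runs and responses are synthetic stand-ins, not the experimental data.

```python
# Hedged sketch of the RSM step: quadratic surface over a face-centred CCD.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

corners = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
axial = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]
runs = corners + axial + [(0, 0, 0)] * 6          # 20-run face-centred CCD
df = pd.DataFrame(runs, columns=["depth", "feed", "speed"])

rng = np.random.default_rng(1)                    # synthetic response values
df["roughness"] = (1.5 + 0.4 * df.depth - 0.2 * df.feed
                   + 0.1 * df.depth ** 2 + rng.normal(0, 0.05, len(df)))

# Main effects, two-way interactions, and pure quadratic terms
quad = smf.ols("roughness ~ (depth + feed + speed) ** 2"
               " + I(depth ** 2) + I(feed ** 2) + I(speed ** 2)",
               data=df).fit()
print(quad.params)        # coefficients of the fitted response surface
```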

Keywords: incremental forming, response surface methodology, optimization, wall thickness, surface roughness

Procedia PDF Downloads 319
811 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. Large amounts of open-source C and C++ code are now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten the dependencies. Moreover, we preserve semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
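
A minimal sketch of the embed-then-classify pipeline described above, assuming token sequences have already been extracted from the intermediate representation; the vocabulary size, layer sizes and training call are illustrative assumptions, not the paper's settings.

```python
# BiLSTM classifier over embedded token sequences (hypothetical shapes).
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM = 20000, 100

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),         # token id -> vector
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),           # vulnerable or not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

# x: (n_functions, max_len) int token ids; y: (n_functions,) 0/1 labels
# model.fit(x, y, validation_split=0.1, epochs=5, batch_size=64)
```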

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 65
810 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia

Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden

Abstract:

The proposed stringent mitigation targets require an immediate start to a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating the electricity produced over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system to ensure security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate system integration of high shares of VRES is to increase cross-sectoral integration through the coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model for Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.
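
At its core, a model of this kind solves a cost-minimization problem over technology choices subject to demand and emissions constraints. The toy linear program below conveys that structure only; the technologies, costs and limits are invented for illustration and bear no relation to the paper's actual model.

```python
# Toy cost-minimizing supply mix (all numbers are illustrative assumptions).
from scipy.optimize import linprog

# Decision variables: annual energy from [coal, gas, wind, solar] in TWh
cost = [60.0, 70.0, 45.0, 40.0]     # assumed cost per TWh, arbitrary units
demand = 300.0                      # TWh: electricity plus electrified transport
fossil_cap = 60.0                   # TWh: hypothetical emissions budget

res = linprog(
    c=cost,
    A_ub=[[1, 1, 0, 0]], b_ub=[fossil_cap],   # coal + gas under the budget
    A_eq=[[1, 1, 1, 1]], b_eq=[demand],       # supply meets demand exactly
    bounds=[(0, None), (0, None), (0, 120), (0, 150)],  # VRES integration caps
)
print(dict(zip(["coal", "gas", "wind", "solar"], res.x.round(1))))
```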

Keywords: decarbonization, energy system modelling, renewable energy, sector coupling

Procedia PDF Downloads 119
809 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications, with a 3D point cloud created from numerous LiDAR point data. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to a 50% density level while still maintaining the quality of the DTM.
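
The random reduction step itself is simple to reproduce; the sketch below draws the 75/50/25/5% subsets from a stand-in point array with NumPy. Kriging interpolation and the DTM comparison would follow, for example with a geostatistics package such as pykrige.

```python
# Random subsampling of an image-based point cloud (array is a stand-in).
import numpy as np

rng = np.random.default_rng(42)
points = rng.random((1_000_000, 3))      # placeholder for x, y, z coordinates

subsets = {}
for pct in (75, 50, 25, 5):
    k = int(len(points) * pct / 100)
    idx = rng.choice(len(points), size=k, replace=False)  # uniform random draw
    subsets[pct] = points[idx]
    print(f"{pct}% subset: {k} points")
```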

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 138
808 Environmental Protection by Optimum Utilization of Car Air Conditioners

Authors: Sanchita Abrol, Kunal Rana, Ankit Dhir, S. K. Gupta

Abstract:

According to NREL's findings, 7 billion gallons of petrol are used annually to run the air conditioners of passenger vehicles (nearly 6% of total fuel consumption in the USA). Beyond fuel use, the Environmental Protection Agency reported that refrigerant leaks from auto air conditioning units add an additional 50 million metric tons of carbon emissions to the atmosphere each year. The objective of our project is to deal with this vital issue by carefully modifying the interiors of a car, thereby increasing its mileage and the efficiency of its engine. This would consequently decrease tailpipe emissions and generated pollution along with improving car performance. An automatic mechanism, deployed between the front and the rear seats and consisting of a transparent thermal insulating sheet/curtain, would roll down as required by the driver in order to optimize the volume for effective air conditioning when travelling alone or with one passenger. The reduction in effective volume will yield favourable results. Even on a mild sunny day, the temperature inside a parked car can quickly spike to life-threatening levels. For a stationary parked car, insulation would be provided beneath its metal body so as to reduce the rate of heat transfer. As a result, the car would not require a large amount of air conditioning to maintain a lower temperature, which would provide similar benefits. The authors established feasibility studies, system engineering, and preliminary theoretical and experimental results confirming the idea and motivating the fabrication and testing of the actual product.

Keywords: automation, car, cooling insulating curtains, heat optimization, insulation, reduction in tail emission, mileage

Procedia PDF Downloads 254
807 Fuzzy Logic for Control and Automatic Operation of Natural Ventilation in Buildings

Authors: Ekpeti Bukola Grace, Mahmoudi Sabar Esmail, Chaer Issa

Abstract:

Global energy consumption has been increasing steadily over the last half-century, and this trend is projected to continue. As energy demand rises in many countries throughout the world due to population growth, natural ventilation in buildings has been identified as a viable option for lowering these demands, saving costs, and lowering CO2 emissions. However, natural ventilation is driven by forces that are generally unpredictable in nature; thus, it is important to manage the resulting airflow in order to maintain pleasant indoor conditions, making it a complex system that necessitates specific control approaches. The effective application of the fuzzy logic technique, amidst other intelligent systems, is one of the best ways to bridge this gap, as its control dynamics relate closely to human reasoning and linguistic description. This article reviews existing literature and presents practical solutions by applying fuzzy logic control with optimized techniques, selected input parameters, and expert rules to design a more effective control system. The controller monitors indoor temperature, outdoor temperature, carbon dioxide level, wind velocity, and rain as input variables to the system, while the output variable remains the control of window opening. This is achieved through the use of the fuzzy logic control toolbox in MATLAB and running simulations in SIMULINK to validate the effectiveness of the proposed system. A comparison analysis model via simulation is carried out, and with the data obtained, an improvement in control actions and energy savings was recorded.
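
The study builds its controller in MATLAB's Fuzzy Logic Toolbox; as a rough open-source analogue, the sketch below wires two of the five inputs to the window-opening output with scikit-fuzzy. The membership functions and rules here are illustrative assumptions, not the paper's expert rule base.

```python
# Two-input fuzzy window controller sketch (rules are illustrative only).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

indoor = ctrl.Antecedent(np.arange(10, 41, 1), 'indoor_temp')     # deg C
co2 = ctrl.Antecedent(np.arange(400, 2001, 50), 'co2')            # ppm
window = ctrl.Consequent(np.arange(0, 101, 1), 'window_opening')  # % open

indoor.automf(3, names=['cool', 'comfortable', 'warm'])
co2.automf(3, names=['low', 'medium', 'high'])
window['closed'] = fuzz.trimf(window.universe, [0, 0, 40])
window['half'] = fuzz.trimf(window.universe, [20, 50, 80])
window['open'] = fuzz.trimf(window.universe, [60, 100, 100])

rules = [
    ctrl.Rule(indoor['warm'] | co2['high'], window['open']),
    ctrl.Rule(indoor['comfortable'] & co2['medium'], window['half']),
    ctrl.Rule(indoor['cool'] & co2['low'], window['closed']),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['indoor_temp'] = 28.0
sim.input['co2'] = 900.0
sim.compute()
print(sim.output['window_opening'])   # defuzzified % opening
```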

Keywords: fuzzy logic, intelligent control systems, natural ventilation, optimization

Procedia PDF Downloads 105
806 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for prediction, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than working with the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
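
A minimal sketch of the SVM branch (one yes/no classifier per disorder), assuming a normalized feature matrix of peak plantar pressures; the data below are random placeholders, not the clinic dataset.

```python
# Per-disorder binary SVM on plantar pressure features (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((2323, 60))        # stand-in pressure features per participant
y = rng.integers(0, 2, 2323)      # 1 = disorder present, 0 = absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))   # ~0.5 on random data, by design
```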

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 321
805 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. This new method has been tested with all known transit signals, with 100% confirmation. In addition, this new method has been successfully applied to the Kepler Object of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
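
The core of fast folding is phase-folding a light curve at many trial periods so that repeated shallow transits stack up above the noise. The single-period NumPy version below is for intuition only; the paper's GPU implementation parallelizes over trial periods and feeds the folded profiles to the CNN.

```python
# Phase-fold a light curve at one trial period and bin the result.
import numpy as np

def fold(time, flux, period, n_bins=200):
    phase = (time % period) / period               # map each sample to [0, 1)
    bins = (phase * n_bins).astype(int)
    summed = np.bincount(bins, weights=flux, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return summed / np.maximum(counts, 1)          # mean flux per phase bin

# folded = fold(t, f, trial_period)  -> a transit appears as a dip in `folded`
```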

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 202
804 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions

Authors: Oscar E. Cariceo, Claudia V. Casal

Abstract:

Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', training on data to test specific hypotheses and predict a certain outcome based on a current scenario, improving with experience. Machine learning can be classified into two general categories depending on the nature of the problem it needs to tackle. First, supervised learning involves a dataset whose output is already known. Supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict results for discrete qualitative variables. For social work research, machine learning generates predictions as a key element in improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.
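
A hedged sketch of the modelling step: a scikit-learn logistic regression over the predictors named above. The encoding and the synthetic rows are placeholders; the actual survey data and preprocessing are not reproduced here.

```python
# Logistic regression predicting cyberbullying from survey-style predictors.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),        # 0/1 encoded
    "age": rng.integers(12, 19, n),
    "grade": rng.integers(7, 13, n),
    "school_type": rng.integers(0, 3, n),   # hypothetical category codes
    "self_esteem": rng.random(n),           # hypothetical scale score
    "cyberbullied": rng.integers(0, 2, n),  # outcome label
})

X, y = df.drop(columns="cyberbullied"), df["cyberbullied"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```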

Keywords: cyberbullying, evidence based practice, machine learning, social work research

Procedia PDF Downloads 151
803 Chemical Study of Volatile Organic Compounds (VOCs) from Xylopia aromatica (Lam.) Mart. (Annonaceae)

Authors: Vanessa G. P. Severino, João Gabriel M. Junqueira, Michelle N. G. do Nascimento, Francisco W. B. Aquino, João B. Fernandes, Ana P. Terezan

Abstract:

The scientific interest in analyzing VOCs represents a significant modern research field because of their importance in most branches of present-day life and industry. It is therefore extremely important to investigate, identify and isolate volatile substances, since they can be used in different areas, such as food, medicine, cosmetics, perfumery, aromatherapy, pesticides, repellents and other household products, through methods for extracting volatile constituents such as solid phase microextraction (SPME), hydrodistillation (HD), solvent extraction (SE), Soxhlet extraction, supercritical fluid extraction (SFE), steam distillation (SD) and vacuum distillation (VD). Chemometrics is an area of chemistry that uses statistical and mathematical tools for the planning and optimization of experimental conditions and for extracting relevant chemical information from multivariate chemical data. In this context, the focus of this work was the chemical study of the VOCs of the species X. aromatica by SPME, in search of constituents that can be used in the industrial sector, particularly in food, cosmetics and perfumery, since these industrial areas play a considerable role. In addition, chemometric analysis was used to maximize the answers of this research, in order to detect the largest number of compounds. The investigation of flowers of X. aromatica in vitro and in vivo proved consistent, although certain factors appeared to influence the composition of metabolites, and the chemometric analysis strengthened the interpretation. Thus, the study of the chemical composition of X. aromatica contributed to knowledge of the species' VOCs and a possible application.

Keywords: chemometrics, flowers, HS-SPME, Xylopia aromatica

Procedia PDF Downloads 337
802 Study of Methods to Reduce Carbon Emissions in Structural Engineering

Authors: Richard Krijnen, Alan Wang

Abstract:

As the world is aiming to reach net zero around 2050, structural engineers must begin finding solutions to contribute to this global initiative. Approximately 40% of global energy-related emissions are due to buildings and construction, and a building’s structure accounts for 50% of its embodied carbon, which indicates that structural engineers are key contributors to finding solutions to reach carbon neutrality. However, this task presents a multifaceted challenge as structural engineers must navigate technical, safety and economic considerations while striving to reduce emissions. This study reviews several options and considerations to reduce carbon emissions that structural engineers can use in their future designs without compromising the structural integrity of their proposed design. Low-carbon structures should adhere to several guiding principles. Firstly, prioritize the selection of materials with low carbon footprints, such as recyclable or alternative materials. Optimization of design and engineering methods is crucial to minimize material usage. Encouraging the use of recyclable and renewable materials reduces dependency on natural resources. Energy efficiency is another key consideration involving the design of structures to minimize energy consumption across various systems. Choosing local materials and minimizing transportation distances help in reducing carbon emissions during transport. Innovation, such as pre-fabrication and modular design or low-carbon concrete, can further cut down carbon emissions during manufacturing and construction. Collaboration among stakeholders and sharing experiences and resources are essential for advancing the development and application of low-carbon structures. This paper identifies current available tools and solutions to reduce embodied carbon in structures, which can be used as part of daily structural engineering practice.

Keywords: efficient structural design, embodied carbon, low-carbon material, sustainable structural design

Procedia PDF Downloads 16
801 Human-Automation Interaction in Law: Mapping Legal Decisions and Judgments, Cognitive Processes, and Automation Levels

Authors: Dovile Petkeviciute-Barysiene

Abstract:

Legal technologies not only create new ways of accessing and providing legal services but also transform the role of legal practitioners. Both lawyers and users of legal services expect automated solutions to outperform people in objectivity and impartiality. Although the fairness of automated decisions is crucial, research on assessing the various characteristics of automated processes related to perceived fairness has only begun. One of the major obstacles to this research is the lack of a comprehensive understanding of which legal actions are automated or could be meaningfully automated, and to what extent. Oftentimes, neither the public nor legal practitioners can envision technological input due to the lack of general, illustrative examples. The aim of this study is to map the decision-making stages and automation levels which are and/or could be achieved in legal actions related to pre-trial and trial processes. Major legal decisions and judgments were identified during consultations with legal practitioners. The dual-process model of information processing is used to describe the cognitive processes taking place while making legal decisions and judgments during pre-trial and trial actions. Some of the existing legal technologies are incorporated into the analysis as well. Several published automation level taxonomies are considered; however, none of them fits well into the legal context, as they were all created for avionics, teleoperation, unmanned aerial vehicles, etc. From the information processing perspective, the analysis of legal decisions and judgments exposes the situations that are most sensitive to cognitive bias and, among other things, helps to identify areas that would benefit from automation the most. Automation level analysis, in turn, provides a systematic approach to interaction and cooperation between humans and algorithms. Moreover, an integrated map of legal decisions and judgments, information processing characteristics, and automation levels together provides groundwork for research on the perceived fairness and acceptance of legal technology. Acknowledgment: This project has received funding from the European Social Fund (project No 09.3.3-LMT-K-712-19-0116) under a grant agreement with the Research Council of Lithuania (LMTLT).

Keywords: automation levels, information processing, legal judgment and decision making, legal technology

Procedia PDF Downloads 115
800 Formulation and Optimization of Topical 5-Fluorouracil Microemulsions Using Central Composite Design

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Water-in-oil topical microemulsions of 5-FU were developed and optimized using a face-centered central composite design. Topical w/o microemulsions of 5-FU were prepared using sorbitan monooleate (Span 80) and polysorbate 80 (Tween 80) with different oils such as oleic acid (OA), triacetin (TA), and isopropyl myristate (IPM). Ternary phase diagrams designated the microemulsion region, and the face-centered central composite design helped in determining the effects of the selected variables, viz. type of oil, Smix ratio and water concentration, on responses such as drug content, globule size and viscosity of the microemulsions. The CCD design showed that the factors have statistically significant effects (p<0.01) on the selected responses. The actual responses showed excellent agreement with the predicted values suggested by the CCD, with low residual standard error. Similarly, the optimized values were found within the range predicted by the model. Furthermore, other characteristics of the microemulsions, such as pH and conductivity, were investigated. For the optimized microemulsion batch, ex vivo skin flux, skin irritation and retention studies were performed and compared with a marketed 5-FU formulation. In ex vivo skin permeation studies, higher skin retention of the drug and minimal flux were achieved for the optimized microemulsion batch compared to the marketed cream. The results confirmed the actual responses to be in agreement with the predicted ones, with the least residual standard errors. Controlled release of the drug was achieved for the optimized batch with higher skin retention of 5-FU, which can further be utilized for the treatment of many dermatological disorders.

Keywords: 5-FU, central composite design, microemulsion, ternary phase diagram

Procedia PDF Downloads 459
799 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice, the constant-velocity assumption is often violated, which may lead to a large deviation between the obtained initial pose and the real value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the angular velocity and the translation increment remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that our proposed method yields a more accurate initial pose estimation, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46% on the two test sequences, respectively.
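
A sketch of the pose extrapolation idea described above: decouple rotation and translation, represent the inter-frame rotation as a rotation vector, and assume both the angular velocity and the translation increment persist for one more frame. This is an illustrative reading of the abstract, not the authors' exact formulation.

```python
# Predict the current frame's initial pose from the two previous poses.
from scipy.spatial.transform import Rotation as R

def predict_pose(R_prev2, t_prev2, R_prev1, t_prev1):
    """R_* are scipy Rotation objects, t_* are 3-vectors (two past frames)."""
    dR = R_prev1 * R_prev2.inv()        # inter-frame rotation
    rotvec = dR.as_rotvec()             # rotation-vector representation
    dt = t_prev1 - t_prev2              # inter-frame translation increment
    R_pred = R.from_rotvec(rotvec) * R_prev1   # constant angular velocity
    t_pred = t_prev1 + dt                      # constant translation increment
    return R_pred, t_pred

# R_pred, t_pred then seed feature matching and nonlinear refinement.
```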

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 70
798 Levansucrase from Zymomonas Mobilis KIBGE-IB14: Production Optimization and Characterization for High Enzyme Yield

Authors: Sidra Shaheen, Nadir Naveed Siddiqui, Shah Ali Ul Qader

Abstract:

In recent years, significant progress has been made in discovering and developing new bacterial polysaccharide-producing organisms possessing highly functional properties. Levan is a natural biopolymer of fructose which is produced by a transfructosylation reaction in the presence of levansucrase, one of the industrially promising enzymes, offering a variety of applications in the fields of cosmetics, foods and pharmaceuticals. Although levan has significant applications, its yield is lower than that of other biopolymers due to the inefficiency of producer microorganisms. Among the wide range of levansucrase-producing microorganisms, Zymomonas mobilis is considered a potential candidate for the large-scale production of this natural polysaccharide. The present investigation is concerned with the isolation of a levansucrase-producing natural isolate with maximum enzyme production. Furthermore, production parameters were optimized to obtain a higher enzyme yield. Levansucrase was partially purified and characterized to study its applicability on an industrial scale. The results of this study revealed that the bacterial strain Z. mobilis KIBGE-IB14 was the best producer of levansucrase. Bacterial growth and enzyme production were greatly influenced by physical and chemical parameters. Maximum levansucrase production was achieved after 24 hours of fermentation at 30°C using a modified medium of pH 6.5. Contrary to other levansucrases, the one presented in the current study is able to produce a high amount of product in a relatively short period of time, with an optimum temperature of 35°C. Due to these advantages, this enzyme can be used on a large scale for the commercial production of levan and other important metabolites.

Keywords: levansucrase, metabolites, polysaccharides, transfructosylation

Procedia PDF Downloads 482
797 Bean in Turkey: Characterization, Inter Gene Pool Hybridization Events, Breeding, Utilizations

Authors: Faheem Shahzad Baloch, Muhammad Azhar Nadeem, Muhammad Amjad Nawaz, Ephrem Habyarimana, Gonul Comertpay, Tolga Karakoy, Rustu Hatipoglu, Mehmet Zahit Yeken, Vahdettin Ciftci

Abstract:

Turkey is considered a bridge between Europe, Asia, and Africa and possibly played an important role in the distribution of many crops, including the common bean. Hundreds of common bean landraces can be found in Turkey, particularly in farmers' fields, and they consistently contribute to overall production. To investigate the existing genetic diversity and the hybridization events between the Andean and Mesoamerican gene pools in the Turkish common bean, 188 common bean accessions (182 landraces and 6 modern cultivars as controls) were collected from 19 different Turkish geographic regions. These accessions were characterized using phenotypic data (growth habit and seed weight), geographic provenance, and 12557 high-quality whole-genome DArTseq markers; 3767 novel DArTseq loci were also identified. The clustering algorithms resolved the Turkish common bean landrace germplasm into the two recognized gene pools, Mesoamerican and Andean. Hybridization events were observed in both gene pools (14.36% of the accessions), mostly in the Mesoamerican (7.97% of the accessions), and were rare relative to previous European studies. This lower level of hybridization indicates that the Turkish common bean germplasm exists in a form closer to the original than that of Europe. The Mesoamerican gene pool showed a higher level of diversity, while the Andean gene pool was predominant (56.91% of the accessions) but genetically less diverse and phenotypically more pure, reflecting farmers' greater preference for the Andean gene pool. We also found some genetically distinct landraces and, overall, a meaningful level of genetic variability which can be used by the scientific community in breeding efforts to develop superior common bean strains.

Keywords: bean germplasm, DArTseq markers, genotyping by sequencing, Turkey, whole genome diversity

Procedia PDF Downloads 228
796 Optimization Method of the Number of Berth at Bus Rapid Transit Stations Based on Passenger Flow Demand

Authors: Wei Kunkun, Cao Wanyang, Xu Yujie, Qiao Yuzhi, Liu Yingning

Abstract:

The reasonable design of bus parking spaces can improve the traffic capacity of a station and reduce traffic congestion. In order to reasonably determine the number of berths at BRT (Bus Rapid Transit) stops, this study draws on actual BRT station observation data, scheduling data, and passenger flow data, and optimizes the number of station berths from the perspective of balancing supply and demand at the site. Combined with the classical capacity calculation model, this paper first analyzes the important factors affecting the traffic capacity of BRT stops by using SPSS PRO and MATLAB programming software, namely the distribution of BRT arrivals and the distribution of BRT dwell times. Secondly, the berth-number calculation method of the classic Highway Capacity Manual (HCM) model is optimized based on the actual passenger demand of the station, and a method applicable to the actual number of station berths is proposed. Taking Gangding Station of the Zhongshan Avenue Bus Rapid Transit corridor in Guangzhou as an example, and based on the calculation method proposed in this paper, the number of berths at sub-stations 1, 2 and 3 is 2, which reduces the road space of the station by 33.3% compared with the previous 3 berths per sub-station and returns that space to social vehicles. Therefore, while the passenger flow demand of the BRT station is still met, the road space of the station is reduced, the road is returned to social vehicles, the traffic capacity of social vehicles is improved, and the traffic capacity and efficiency of the BRT corridor system are improved as a whole.
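
For context, the sketch below shows one common textbook form of the HCM/TCQSM loading-area capacity calculation that berth sizing of this kind starts from; the dwell time, clearance, failure-rate z-score, coefficient of variation and demand figures are all hypothetical, not the Gangding Station data.

```python
# Textbook-style berth sizing: loading-area capacity, then berth count.
import math

def loading_area_capacity(g_over_c=1.0, clearance=10.0, dwell=30.0,
                          z=1.28, cv=0.6):
    """Buses/h one loading area can serve (one common HCM/TCQSM form):
    3600(g/C) / (t_c + t_d(g/C) + Z*c_v*t_d), with clearance t_c and
    dwell t_d in seconds, Z the failure-rate z-score, c_v the dwell CV."""
    return 3600 * g_over_c / (clearance + dwell * g_over_c + z * cv * dwell)

demand = 90                                   # hypothetical buses/h at the stop
per_berth = loading_area_capacity()           # ~57 buses/h per loading area
print(f"capacity {per_berth:.0f} buses/h -> "
      f"{math.ceil(demand / per_berth)} berths needed")
```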

Keywords: urban transportation, bus rapid transit station, HCM model, capacity, number of berths

Procedia PDF Downloads 81
795 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals the flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementations of the optical flow technique for supersonic flows remain rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Details of the methodology employed and the challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, however, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
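
As background, the sketch below is a minimal NumPy implementation of the baseline Horn-Schunck iteration on an image pair; the paper's adapted algorithm adds refinements beyond this textbook version, and the inputs here are assumed to be float grayscale arrays of equal shape.

```python
# Baseline Horn-Schunck optical flow (textbook form, illustrative only).
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)   # spatial derivatives
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)   # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbour mean
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):                      # Jacobi-style update
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        d = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * d, v_bar - Iy * d
    return u, v        # per-pixel displacement components between frames
```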

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 296
794 Effects of Nano-Coating on the Mechanical Behavior of Nanoporous Metals

Authors: Yunus Onur Yildiz, Mesut Kirca

Abstract:

In this study, the mechanical properties of a nanoporous metal coated with a different metallic material are studied through a new atomistic modelling technique and molecular dynamics (MD) simulations. This new atomistic modelling technique is based on the Voronoi tessellation method for the geometric representation of the ligaments. With the proposed technique, atomistic models of nanoporous metals which have randomly oriented ligaments with non-uniform mass distribution along the ligament axis can be generated, enabling researchers to control both ligament length and diameter. Furthermore, by utilizing this technique, atomistic models of coated nanoporous materials can be obtained numerically for further mechanical or thermal characterization. In general, this study consists of two stages. In the first stage, we use algorithms developed for generating the atomic coordinates of the coated nanoporous material. In this regard, the coordinates of randomly distributed points are determined in a controlled way to be employed in the establishment of the Voronoi tessellation, which results in randomly oriented and intersecting line segments. Then, the line segment representation of the Voronoi tessellation is transformed into an atomic structure by a special process. This special process includes the generation of a non-uniform volumetric core region in which atoms can be generated based on a specific crystal structure. As an extension, this technique can be used for coating nanoporous structures by creating another volumetric region encapsulating the core region, in which the atoms of the coating material are generated. The ultimate goal of the study at this stage is to generate atomic coordinates that can be employed in MD simulations of randomly organized coated nanoporous structures. In the second stage of the study, the mechanical behavior of the coated nanoporous models is investigated by examining deformation mechanisms through MD simulations. In this way, the effect of the coating on the mechanical behavior of the selected material couple is investigated.
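
The first stage can be illustrated with SciPy: seed points produce a Voronoi tessellation whose finite ridge edges serve as candidate ligament axes, along which atoms would then be generated. The box size and point count below are arbitrary placeholders, and the atom-generation step is not reproduced.

```python
# Seed points -> Voronoi tessellation -> finite edges as ligament axes.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
seeds = rng.random((60, 3)) * 100.0        # random seeds in a 100^3 box

vor = Voronoi(seeds)
segments = [(vor.vertices[i], vor.vertices[j])
            for ridge in vor.ridge_vertices         # each ridge is a face
            for i, j in zip(ridge, ridge[1:] + ridge[:1])
            if i != -1 and j != -1]                 # drop edges at infinity
print(len(segments), "candidate ligament segments")
```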

Keywords: atomistic modelling, molecular dynamics, nanoporous metals, Voronoi tessellation

Procedia PDF Downloads 265
793 Application of Response Surface Methodology to Optimize the Factors Influencing the Wax Deposition of Malaysian Crude Oil

Authors: Basem Elarbe, Ibrahim Elganidi, Norida Ridzuan, Norhyati Abdullah

Abstract:

Wax deposition in production pipelines and transportation tubing from offshore to onshore is a critical problem in the oil and gas industry due to low-temperature conditions. It may lead to a reduction in production, shut-ins, plugging of pipelines and increased fluid viscosity. The most popular approach to solving this issue is the injection of a wax inhibitor into the pipeline. This research aims to determine the amount of wax deposition of Malaysian crude oil by evaluating the effective parameters using Design-Expert version 7.1.6 and the response surface methodology (RSM). Important parameters affecting wax deposition, such as cold finger temperature, inhibitor concentration and experimental duration, were investigated. It can be concluded that the SA-co-BA copolymer had a high capability of reducing wax under different conditions; the minimum wax deposition was found at 300 rpm, 14°C, 1 h and 1200 ppm, where the amount of wax collected was 0.12 g. The RSM approach was applied using a rotatable central composite design (CCD) to minimize the wax deposit amount. The analysis of variance (ANOVA) results for the regression model revealed an R2 value of 0.9906, indicating that the model explains 99.06% of the data variation, with just 0.94% of the total variation left unexplained. This indicates that the model is highly significant, confirming close agreement between the experimental and the predicted values. In addition, the results show that the amount of wax deposit decreased significantly with the increase of temperature and of the concentration of poly(stearyl acrylate-co-behenyl acrylate) (SABA), which were set at 14°C and 1200 ppm, respectively. The amount of wax deposit was successfully reduced to a minimum value of 0.01 g after optimization.

Keywords: wax deposition, SABA inhibitor, RSM, operation factors

Procedia PDF Downloads 263
792 Optimization, Characterization and Stability of Trachyspermum copticum Essential Oil Loaded in Niosome Nanocarriers

Authors: Mohadese Hashemi, Elham Akhoundi Kharanaghi, Fatemeh Haghiralsadat, Mojgan Yazdani, Omid Javani, Mahboobe Sharafodini, Davood Rajabi

Abstract:

Niosomes are non-ionic surfactant vesicles in aqueous media that form closed bilayer structures and can be used as carriers of hydrophilic and hydrophobic compounds. The use of niosomes for the encapsulation of essential oils (EOs) is an attractive new approach to overcoming their physicochemical stability concerns, which include sensitivity to oxygen, light, temperature, and volatility, as well as their reduced bioavailability due to low solubility in water. EOs are unstable and fragile volatile compounds of strong pharmaceutical interest due to their medicinal properties, such as antiviral, anti-inflammatory, antifungal, and antioxidant activities without side effects. Trachyspermum copticum (ajwain) is an annual aromatic plant with important medicinal properties that grows widely around the Mediterranean region and in south-west Asian countries. The major components of ajwain oil have been reported as thymol, γ-terpinene, p-cymene, and carvacrol, which provide antimicrobial and antioxidant activity. The aim of this work was to formulate ajwain essential oil-loaded niosomes to improve the water solubility of the natural product and to evaluate their physico-chemical features and stability. Ajwain oil was obtained through steam distillation using a Clevenger-type apparatus, and GC/MS was applied to identify the main components of the essential oil. Niosomes were prepared using the thin film hydration method, and the nanoparticles were characterized for particle size, dispersity index, zeta potential, encapsulation efficiency, in vitro release, and morphology.

Keywords: trachyspermum copticum, ajwain, niosome, essential oil, encapsulation

Procedia PDF Downloads 462
791 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy and high-precision motion controller is one of the important issues in today's industry. There are effective solutions available in the industry, but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in the motion control industry. Along with the full trajectory profile, a triangular drive is implemented to eliminate errors at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
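
As a software model of what such a profile generator computes, the sketch below is a Bresenham-style 3-axis step generator: it emits one step tuple per tick so that all three axes arrive together, which is the essence of linear interpolation. The hardware emits real-time pulses instead; this Python rendering is illustrative only.

```python
# Bresenham-style 3-axis linear interpolation (one step tuple per tick).
def line3d_steps(dx, dy, dz):
    """Yield (sx, sy, sz) unit steps taking the tool from the origin
    to (dx, dy, dz), distributing minor-axis steps evenly."""
    steps = max(abs(dx), abs(dy), abs(dz))
    ex = ey = ez = 0                      # per-axis error accumulators
    x = y = z = 0
    for _ in range(steps):
        ex += abs(dx); ey += abs(dy); ez += abs(dz)
        sx = sy = sz = 0
        if ex >= steps: ex -= steps; sx = 1 if dx > 0 else -1; x += sx
        if ey >= steps: ey -= steps; sy = 1 if dy > 0 else -1; y += sy
        if ez >= steps: ez -= steps; sz = 1 if dz > 0 else -1; z += sz
        yield sx, sy, sz
    assert (x, y, z) == (dx, dy, dz)      # all axes land exactly on target

# e.g. list(line3d_steps(5, 3, -2)) spreads the 3 y-steps and 2 z-steps
# evenly across the 5 ticks that the dominant x-axis requires.
```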

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 191
790 Evaluation of Microbial Community, Biochemical and Physiological Properties of Korean Black Raspberry (Rubus coreanus Miquel) Vinegar Manufacturing Process

Authors: Nho-Eul Song, Sang-Ho Baik

Abstract:

The fermentation characteristics of black raspberry vinegar produced using static cultures without any additives were investigated to establish vinegar manufacturing conditions and improve the quality of the vinegar by optimizing the manufacturing process. Two vinegar manufacturing conditions were prepared: a one-step fermentation using only mother vinegar prepared from naturally occurring black raspberry vinegar, without starter yeast for alcohol fermentation (traditional method), and a two-step fermentation using commercial wine yeast and mother vinegar for acetic acid fermentation. Approximately 12% ethanol was produced after 35 days of fermentation with a yeast population of log 7.6 CFU/mL in the one-step fermentation, resulting in a sugar reduction from 14 to 6°Brix, whereas in the two-step fermentation, the ethanol concentration reached up to 8% after 27 days, with the yeast continuously increasing to log 7.0 CFU/mL. In addition, yeast and ethanol decreased after day 60, accompanied by the proliferation of acetic acid bacteria (log 5.8 CFU/mL) and a titratable acidity of 4.4% in the traditional method and 6% in the two-step fermentation method. DGGE analysis showed that S. cerevisiae was detected until day 77 of the traditional fermentation, with the community gradually shifting to acetic acid bacteria, Acetobacter pasteurianus as the dominant species and Komagataeibacter xylinus at the end of the fermentation. In the two-step fermentation process, however, S. cerevisiae and A. pasteurianus were dominant. The two-step fermentation showed significantly enhanced total polyphenol and flavonoid contents, resulting in higher radical scavenging activity. Our study is the first to reveal the microbial community change together with the chemical change, and it demonstrates a suitable fermentation system for black raspberry vinegar by the static surface method.

Keywords: bacteria, black raspberry, vinegar fermentation, yeast

Procedia PDF Downloads 429
789 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of heart infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011, and were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values, were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Health interventions or changes in lifestyle can therefore be undertaken based on these models to improve the health conditions of individuals at risk.
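
A hedged sketch of the evaluation the study reports: deriving sensitivity, specificity and the predictive values from a binary confusion matrix. The labels and predictions below are placeholders, not the hospital data.

```python
# Sensitivity, specificity, PPV and NPV from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]       # 1 = myocardial infarction (placeholder)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]       # model output (placeholder)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)            # recall on the positive class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
print(sensitivity, specificity, ppv, npv)
```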

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 407
788 Optimization of a Bioremediation Strategy for an Urban Stream of Matanza-Riachuelo Basin

Authors: María D. Groppa, Andrea Trentini, Myriam Zawoznik, Roxana Bigi, Carlos Nadra, Patricia L. Marconi

Abstract:

In the present work, a remediation bioprocess based on the use of a local isolate of the microalga Chlorella vulgaris immobilized in alginate beads is proposed. This process was shown to be effective for the reduction of several chemical and microbial contaminants present in the Cildáñez stream, a watercourse that is part of the Matanza-Riachuelo Basin (Buenos Aires, Argentina). The bioprocess, involving the culture of the microalga under autotrophic conditions in a stirred-tank bioreactor supplied with a marine propeller for 6 days, allowed a significant reduction of Escherichia coli and total coliform numbers (over 95%), as well as of ammoniacal nitrogen (96%), nitrate (86%), nitrite (98%), and total phosphorus (53%) contents. Pb content was also significantly diminished after the bioprocess (95%). Standardized cytotoxicity tests using Allium cepa seeds and Cildáñez water pre- and post-remediation were also performed. The germination rate and mitotic index of onion seeds imbibed in Cildáñez water subjected to the bioprocess were similar to those observed in seeds imbibed in distilled water and significantly superior to those registered when untreated Cildáñez water was used for imbibition. Our results demonstrate the potential of this simple and cost-effective technology to remove urban-water contaminants, offering as an additional advantage the possibility of easy biomass recovery, which may become a source of alternative energy.

Keywords: bioreactor, bioremediation, Chlorella vulgaris, Matanza-Riachuelo Basin, microalgae

Procedia PDF Downloads 221
787 Teaching–Learning-Based Optimization: An Efficient Method for Chinese as a Second Language

Authors: Qi Wang

Abstract:

In the classroom, teachers have been trained to complete the target task within the limited lecture time, while learners need to take in a great deal of new knowledge; however, most of the time the learners arrive without the proper pre-class preparation to efficiently absorb the content taught in class. Under these circumstances, teachers have no time to check whether the learners fully understand the content or how the learners communicate in different contexts, until teachers see the results when the learners are tested. In the past decade, the teaching of Chinese has followed a trend: teaching focuses less on the use of proper grammatical terms and punctuation and now places a heavier focus on materials from real-life contexts. As a result, it has become a greater challenge for teachers, as this requires them to fully understand and prepare what they teach and to explain the content in simple and understandable words. On the other hand, the same challenge also applies to the learners, who come from different countries, as they have to use what they have learnt, based on their personal understanding of the material, to communicate effectively with others in the classroom, even in the context of day-to-day communication. To reach this win-win stage, the Feynman Technique plays a very important role. This practical report presents how the Feynman Technique has been applied in Chinese courses, both written and oral, to motivate the learners to practice more writing, reading and speaking over the past few years. Part 1 analyses different teaching styles and different types of learners to find the most efficient way for both teachers and learners. Part 2 shows, based on the theory of the Feynman Technique, how to let learners build knowledge from knowing the name of something to knowing something, via differently designed target tasks. Part 3 presents the outcomes, which show that the Feynman Technique is the interaction of learning style and teaching style, the double-edged sword of teaching and learning Chinese as a second language.

Keywords: Chinese, Feynman’s technique, learners, teachers

Procedia PDF Downloads 133
786 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated for each image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze performance on the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the uploading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
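
As an illustration of the metadata-driven lookup idea, the sketch below trains a scikit-learn decision tree on a few DICOM-style metadata fields to route a query to the right storage partition before retrieval. The fields, encoding and labels are assumptions for illustration, not the group's actual schema.

```python
# Decision tree over image metadata to narrow the search space.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 5000
# Hypothetical columns: modality id, body-part id, study year, size (MB)
X = np.column_stack([
    rng.integers(0, 5, n),
    rng.integers(0, 10, n),
    rng.integers(2010, 2019, n),
    rng.random(n) * 50,
])
partition = rng.integers(0, 8, n)        # storage partition holding each image

tree = DecisionTreeClassifier(max_depth=6).fit(X, partition)
query = [[2, 4, 2017, 12.5]]             # metadata of the requested image
print("search partition:", tree.predict(query)[0])
```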

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 185
785 Synthesis and Characterization of Cassava Starch-Zinc Nanocomposite Film for Food Packaging Application

Authors: Adeshina Fadeyibi

Abstract:

The application of pure thermoplastic film in food packaging is greatly limited by its poor service performance, which is often enhanced by the addition of organic or inorganic particles in the range of 1–100 nm. Thus, this study was conducted to develop cassava starch zinc-nanocomposite films for applications in food packaging. Three blending ratios of 1000 g cassava starch, 45–55% (w/w) glycerol and 0–2% (w/w) zinc nanoparticles were formulated, mixed and mechanically homogenized to form the nanocomposite. Thermoplastic films were prepared from a dispersed mixture of 24 g of the nanocomposite and 600 ml of distilled water, heated to 90°C for 30 minutes. Plastic molds of 350 × 180 mm and 8, 10 and 12 mm depths were used for film casting and drying at 60°C and 80% RH for 24 hours. The average thicknesses of the dried films were found to be 15, 16 and 17 µm. The films were characterized based on their barrier, thermal, mechanical and structural properties. The results show that the oxygen and water vapor barrier properties increased with glycerol concentration and decreased with thickness, while the full width at half maximum (FWHM) and d-spacing increased with thickness. The higher degree of d-spacing obtained is a consequence of higher polymer intercalation and exfoliation. Also, only 2% weight degradation was observed when the films were exposed to temperatures between 30 and 60°C, indicating that they are thermally stable and can be used for packaging applications in the tropics. The mechanical properties of the films were higher than those of the pure thermoplastic and comparable with LDPE films. The information on the characterized attributes and the optimization of the cassava starch zinc-nanocomposite films justify their application as an alternative to pure thermoplastic and conventional films for food packaging.

Keywords: synthesis, characterization, cassava starch, nanocomposite film, packaging

Procedia PDF Downloads 94
784 Thermolysin Entrapment in a Gold Nanoparticles/Polymer Composite: Construction of an Efficient Biosensor for Ochratoxin A Detection

Authors: Fatma Dridi, Mouna Marrakchi, Mohammed Gargouri, Alvaro Garcia Cruz, Sergei V. Dzyadevych, Francis Vocanson, Joëlle Saulnier, Nicole Jaffrezic-Renault, Florence Lagarde

Abstract:

An original method has been developed for the immobilization of thermolysin onto gold interdigitated electrodes for the detection of ochratoxin A (OTA) in olive oil samples. A mix of polyvinyl alcohol (PVA), polyethylenimine (PEI) and gold nanoparticles (AuNPs) was used. The sensor chips were cross-linked in a saturated glutaraldehyde (GA) vapor atmosphere in order to render the two polymers water-stable. The performance of the AuNPs/(PVA/PEI)-modified electrode was compared to a traditional enzyme immobilization method using bovine serum albumin (BSA). Atomic force microscopy (AFM) experiments were employed to provide useful insight into the structure and morphology of the immobilized thermolysin composite membranes. The enzyme immobilization method influences the topography and texture of the deposited layer. Biosensor optimization and analytical characteristics were studied. Under optimal conditions, the AuNPs/(PVA/PEI)-modified electrode showed a higher increase in sensitivity: an enhancement factor of 700 could be achieved, with a detection limit of 1 nM. The newly designed OTA biosensors showed long-term stability and good reproducibility. The relevance of the method was evaluated using commercial spiked olive oil samples. No pretreatment of the sample was needed for testing, and no matrix effect was observed. Recovery values were close to 100%, demonstrating the suitability of the proposed method for OTA screening in olive oil.

Keywords: thermolysin, ochratoxin A, polyvinyl alcohol, polyethylenimine, gold nanoparticles, olive oil

Procedia PDF Downloads 566