Search results for: active toolkit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 967

127 Silver Modified TiO2/Halloysite Thin Films for Decontamination of Target Pollutants

Authors: Dionisios Panagiotaras, Elias Stathatos, Dimitrios Papoulis

Abstract:

The sol-gel method has been used to fabricate nanocomposite films on glass substrates composed of halloysite clay mineral and nanocrystalline TiO2. The synthesis involves a simple chemical method utilizing a nonionic surfactant molecule as a pore-directing agent along with an acetic acid-based sol-gel route in the absence of water molecules. Thermal treatment of the composite films at 450 °C ensures the elimination of organic material and leads to the formation of TiO2 nanoparticles on the surface of the halloysite nanotubes. Microscopy techniques and porosimetry methods were used to delineate the structural characteristics of the materials. The nanocomposite films produced were crack-free, with the active anatase crystal phase of small crystallite size deposited on the halloysite nanotubes. The photocatalytic properties of the new materials were examined through the decomposition of the Basic Blue 41 azo dye in solution. These nanotechnology-based composite films show high efficiency for dye discoloration despite varying halloysite quantities and the small amount of halloysite/TiO2 catalyst immobilized onto the glass substrates. Moreover, we examined the modification of the halloysite/TiO2 films with silver particles in order to improve their photocatalytic properties. Indeed, the presence of silver nanoparticles enhances the discoloration rate of Basic Blue 41 compared to the efficiencies obtained for unmodified films.

Keywords: Clay mineral, nanotubular Halloysite, Photocatalysis, Titanium Dioxide, Silver modification.

126 Predictor Factors for Treatment Failure among Patients on Second Line Antiretroviral Therapy

Authors: Mohd. A. M. Rahim, Yahaya Hassan, Mathumalar L. Fahrni

Abstract:

A second line antiretroviral therapy (ART) regimen is used when patients fail their first line regimen. Many factors, such as non-adherence and drug resistance as well as virological and immunological failure, lead to failure of second line highly active antiretroviral therapy (HAART) regimens. This study was aimed at determining predictor factors for treatment failure with second line HAART and analyzing median survival times. An observational, retrospective study was conducted in Sungai Buloh Hospital (HSB) to assess the current status of HIV patients treated with second line HAART regimens. Convenience sampling was used, and 104 patients were included based on the study's inclusion and exclusion criteria. Data were collected for six months, i.e., from July until December 2013, and then analysed using SPSS version 18. Kaplan-Meier and Cox regression analyses were used to measure median survival times and predictor factors for treatment failure. The study population consisted mainly of male subjects, aged 30-45 years, who were heterosexual and had had HIV infection for less than 6 years. The most common second line HAART regimen given was a lopinavir/ritonavir (LPV/r)-based combination. Kaplan-Meier analysis showed that patients on LPV/r demonstrated longer median survival times than patients on indinavir/ritonavir (IDV/r)-based combinations (p<0.001). The commonest reason for treatment failure with second line HAART was non-adherence. Based on Cox regression analysis, other predictor factors for treatment failure with second line HAART regimens were age and mode of HIV transmission.
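The survival workflow described above (Kaplan-Meier medians per regimen plus a Cox model for predictors of failure) can be sketched in a few lines; the sketch below is a minimal illustration using Python's lifelines library rather than the SPSS procedure the authors used, and the file and column names are hypothetical.

```python
# Minimal sketch of the described survival analysis using the lifelines
# library (the study itself used SPSS 18); file and column names are
# hypothetical, and categorical predictors are assumed already encoded.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("second_line_haart.csv")

# Kaplan-Meier median survival time per regimen (e.g., LPV/r vs. IDV/r)
for regimen, grp in df.groupby("regimen"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months_to_failure"], event_observed=grp["failed"])
    print(regimen, "median survival (months):", kmf.median_survival_time_)

# Cox proportional hazards model for predictors of treatment failure
cph = CoxPHFitter()
cph.fit(df[["months_to_failure", "failed", "age", "transmission_mode"]],
        duration_col="months_to_failure", event_col="failed")
cph.print_summary()
```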

Keywords: Adherence, antiretroviral therapy, second line, treatment failure.

125 Undergraduates' Learning Preferences: A Comparison of Science, Technology and Social Science Academic Disciplines in Relation to Teaching Designs and Strategies

Authors: Salina Budin, Shaira Ismail

Abstract:

Students learn effectively in a learning environment with a suitable teaching approach that matches their learning preferences. The main objective of the study is to examine the learning preferences amongst students in the Science and Technology (S&T) and Social Science (SS) fields of study at the Universiti Teknologi Mara (UiTM), Pulau Pinang. The measurement instrument is based on the Dunn and Dunn Learning Styles, which measures five elements of learning styles: environmental, sociological, emotional, physiological and psychological. Questionnaires were distributed amongst undergraduates in the Faculty of Mechanical Engineering and the Faculty of Business Management. The respondents comprise 131 diploma students of the Faculty of Mechanical Engineering and 111 degree students of the Faculty of Business Management. The results indicate that both S&T and SS students share similar learning preferences in the environmental aspect, emotional preferences, motivational level, learning responsibility, persistence in learning and learning structure. Most of the S&T students are analytical learners, while the majority of SS students are global learners. Both S&T and SS students are visual learners who prefer active mobility in a relaxed and enjoyable mode with light refreshments during the learning process, and both exhibit reflective characteristics in learning. The S&T students are left-brain dominant, whereas the SS students are right-brain dominant. The findings highlight that both categories of students exhibit similar learning preferences except in psychological preferences.

Keywords: Learning preferences, Dunn and Dunn learning style, teaching approach, science and technology, social science.

124 Effect of Natural Fibres Inclusion in Clay Bricks: Physico-Mechanical Properties

Authors: Chee-Ming Chan

Abstract:

In spite of the advent of new materials, clay bricks remain, arguably, the most popular construction material today. Nevertheless, the low cost and versatility of clay bricks cannot always be associated with high environmental and sustainability values, especially in terms of raw material sources and manufacturing processes. At the same time, the worldwide agricultural footprint is growing fast, with vast agricultural land cultivation and active expansion of the agro-based industry. The resulting large quantities of agricultural wastes, unfortunately, are not always well managed or utilised. These wastes can be recycled, such as by retrieving fibres from disposed leaves and fruit bunches, and then incorporated in brick-making. This way the clay bricks become a 'greener' building material and the discarded natural wastes can be reutilised, avoiding otherwise wasteful landfill and harmful open incineration. This study examined the physical and mechanical properties of clay bricks made by adding two natural fibres to a clay-water mixture, under baked and non-baked conditions. The fibres were sourced from pineapple leaves (PF) and oil palm fruit bunch (OF), and added within the range of 0.25-0.75%. Cement was added as a binder to the mixture at 5-15%. Although the two fibres had different effects on the bricks produced, cement appeared to dominate the compressive strength. The non-baked bricks disintegrated when submerged in water, while the baked ones displayed cement-dependent characteristics in water absorption and density changes. Interestingly, further increase in fibre content did not cause a significant density decrease in either the baked or non-baked bricks.

Keywords: natural fibres, clay bricks, strength, water absorption, density.

123 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a better-performing model than sequentially labeling the same amount of data, and in performance similar to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
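The iterative select-label-retrain loop the abstract describes can be summarized in a short sketch; here score_novelty stands in for the paper's U-shaped novelty network and label_images for the human annotation step (both names are assumptions for illustration).

```python
# Sketch of the iterative image-selection/labeling loop described above;
# score_novelty stands in for the paper's U-shaped novelty network and
# label_images for the human annotation step (both are assumptions).
def active_labeling_loop(unlabeled, label_images, score_novelty,
                         train_model, batch_size=50, rounds=5):
    labeled = label_images(unlabeled[:batch_size])  # initial manual batch
    unlabeled = unlabeled[batch_size:]
    model = train_model(labeled)
    for _ in range(rounds):
        # Rank remaining images by how much novel (unseen) content they hold
        ranked = sorted(unlabeled,
                        key=lambda img: score_novelty(model, img),
                        reverse=True)
        batch, unlabeled = ranked[:batch_size], ranked[batch_size:]
        labeled += label_images(batch)   # human labels the curated subset
        model = train_model(labeled)     # retrain on the enlarged set
    return model, labeled
```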

Keywords: Computer vision, deep learning, object detection, semiconductor.

122 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible options to obtain image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; thus they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to occur. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method of vista creation may introduce some missing regions in the vista. To correct this error, that is, to fill the missing region, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method of missing view regeneration generates the missing view of the vista using the information present in the vista itself.
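As a rough illustration of the stitch-then-regenerate idea (not the authors' rotation-invariant algorithm), the OpenCV sketch below stitches two hypothetical partial images and then inpaints the empty regions left by warping, using the vista's own content:

```python
# Rough illustration of stitching followed by missing-view inpainting
# using OpenCV; this is not the authors' algorithm, and the file names
# are hypothetical.
import cv2
import numpy as np

parts = [cv2.imread(p) for p in ("part1.jpg", "part2.jpg")]
stitcher = cv2.Stitcher_create()
status, vista = stitcher.stitch(parts)
assert status == 0  # 0 == Stitcher_OK

# Treat the black pixels left by warping/rotation as the missing region
gray = cv2.cvtColor(vista, cv2.COLOR_BGR2GRAY)
mask = np.uint8(gray == 0) * 255
filled = cv2.inpaint(vista, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("vista.jpg", filled)
```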

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.

121 A Control Strategy Based on UTT and ISCT for 3P4W UPQC

Authors: Yash Pal, A. Swarup, Bhim Singh

Abstract:

This paper presents a novel control strategy of a three-phase four-wire Unified Power Quality Conditioner (UPQC) for an improvement in power quality. The UPQC is realized by the integration of series and shunt active power filters (APFs) sharing a common dc bus capacitor. The shunt APF is realized using a three-phase, four-leg voltage source inverter (VSI) and the series APF is realized using a three-phase, three-leg VSI. A control technique based on the unit vector template technique (UTT) is used to obtain the reference signals for the series APF, while instantaneous sequence component theory (ISCT) is used for the control of the shunt APF. The performance of the implemented control algorithm is evaluated in terms of power-factor correction, load balancing, neutral source current mitigation and mitigation of voltage and current harmonics, voltage sag and swell in a three-phase four-wire distribution system for different combinations of linear and non-linear loads. In this proposed control scheme of the UPQC, the current/voltage control is applied over the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay and the number of required sensors. MATLAB/Simulink based simulations are obtained, which support the functionality of the UPQC.
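For reference, a commonly cited formulation of the unit vector template technique (a sketch of the general technique, not necessarily the exact scheme implemented here) derives unit templates in phase with the fundamental supply voltage, obtained via a phase-locked loop, and scales them by the desired load-voltage magnitude V_m to form the series APF references:

```latex
u_a = \sin(\omega t), \qquad
u_b = \sin\!\left(\omega t - \tfrac{2\pi}{3}\right), \qquad
u_c = \sin\!\left(\omega t + \tfrac{2\pi}{3}\right),
\qquad
v^{*}_{L\{a,b,c\}} = V_m \, u_{\{a,b,c\}}
```

The reference load voltages v*_L are then compared with the measured load voltages to generate the series APF switching signals.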

Keywords: Power Quality, UPQC, Harmonics, Load Balancing, Power Factor Correction, voltage harmonic mitigation, current harmonic mitigation, voltage sag, swell.

120 The Effect of Magnetite Particle Size on Methane Production by Fresh and Degassed Anaerobic Sludge

Authors: E. Al-Essa, R. Bello-Mendoza, D. G. Wareham

Abstract:

Anaerobic batch experiments were conducted to investigate the effect of magnetite supplementation (7 mM) on methane production from digested sludge undergoing two different microbial growth phases, namely fresh sludge (exponential growth phase) and degassed sludge (endogenous decay phase). Three different particle sizes were assessed: small (50-150 nm), medium (168-490 nm) and large (800 nm - 4.5 µm) particles. Results show that, in the case of the fresh sludge, magnetite significantly enhanced the methane production rate (by up to 32%) and reduced the lag phase (by 15%-41%) as compared to the control, regardless of the particle size used. However, the cumulative methane produced at the end of the incubation was comparable in all treatment and control bottles. In the case of the degassed sludge, only the medium-sized magnetite particles significantly increased the methane production rate (by 12%) as compared to the control. Small and large particles had little effect on the methane production rate but did result in an extended lag phase, which led to significantly lower cumulative methane production at the end of the incubation period. These results suggest that magnetite produces a clear and positive effect on methane production only when an active and balanced microbial community is present in the anaerobic digester. It is concluded that (i) the effect of magnetite particle size on increasing the methane production rate and reducing lag phase duration is strongly influenced by the initial metabolic state of the microbial consortium, and (ii) particle size positively affects methane production when provided within the nanometer size range.

Keywords: Anaerobic digestion, iron oxide (Fe3O4), methanogenesis, nanoparticle.

119 A Remote Sensing Approach for Vulnerability and Environmental Change in Apodi Valley Region, Northeast Brazil

Authors: Mukesh Singh Boori, Venerando Eustáquio Amaro

Abstract:

The objective of this study was to improve our understanding of vulnerability and environmental change in the Apodi Valley Region: its causes, intensity, distribution, and human-environment effects on the ecosystem. This paper identifies, assesses and classifies vulnerability and environmental change in the Apodi valley region using a combined approach of landscape pattern and ecosystem sensitivity. Models were developed using the following five thematic layers: geology, geomorphology, soil, vegetation and land use/cover, by means of a Geographical Information System (GIS) based on hydro-geophysical parameters. In spite of the data problems and shortcomings, using ESRI's ArcGIS 9.3, the vulnerability score, used to classify, weight and combine 15 separate land cover classes into a single indicator, provides a reliable measure of the differences (6 classes) among regions and communities that are exposed to similar ranges of hazards. Indeed, the ongoing and active development of vulnerability concepts and methods has already produced tools that help overcome common issues, such as acting in a context of high uncertainty, taking into account the dynamics and spatial scale of a social-ecological system, or gathering viewpoints from different sciences to combine human- and impact-based approaches. Based on this assessment, this paper proposes concrete perspectives and possibilities to benefit from existing commonalities in the construction and application of assessment tools.

Keywords: Vulnerability, Land use/cover, Ecosystem, Remote sensing, GIS.

118 Using Data Mining in Automotive Safety

Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler

Abstract:

Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests performed according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts' opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, with the aim of reducing the number of tests.
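As a toy example of the kind of missing-data step such a KDD process validates (illustrative only; the paper does not specify its procedure), grouped median imputation over sled-test parameters might look as follows:

```python
# Toy missing-data step: median imputation within test-configuration
# groups; the strategy, file name, and column names are illustrative,
# not the paper's validated procedure.
import pandas as pd

sled = pd.read_csv("sled_tests.csv")  # merged crash/dummy-protocol data
params = [c for c in sled.columns if c.startswith("param_")]
sled[params] = (sled.groupby("test_config")[params]
                    .transform(lambda col: col.fillna(col.median())))
```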

Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact

117 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be seen as a computing paradigm that saves users cost and time; in practice, cloud computing is used in various fields such as education, health and banking. Cloud computing is an internet-dependent technology, so it is a major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in the cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to simulate the task computation and the distributed scheduling methods. This work discusses job scheduling for the distributed processing environment and, by exploring this issue, finds that it works with minimum time and lower cost. Two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.

Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.

116 Enhancing the Indoor Environment in Buildings and Its Effect on Improving Occupants' Health

Authors: Imad M. Assali

Abstract:

Recently, the world's main problems have been global warming and climate change, affecting both outdoor and indoor environments, especially air quality (AQ), as a result of the vast migration of people from rural to urban areas. Cities have become more crowded and denser through irregular population increase, and increasing urbanization has caused many problems for the environment, such as rising land prices, changes in lifestyle, and new buildings that are not adapted to the climate, producing uncomfortable and unhealthy indoor conditions. Interior environments are the places that create the most intimate relationship with the user; consequently, the indoor environment quality (IEQ) of buildings has become uncomfortable and unhealthy for occupants. Symptoms commonly associated with a poor indoor environment include itchiness, headache, fatigue, and respiratory complaints such as cough and congestion; these symptoms tend to improve over time or even disappear when people are away from the building. Therefore, designing a healthy indoor environment to fulfill human needs is the main concern for architects and interior designers. This research explores how occupant expectations and environmental attitudes may influence occupant health and satisfaction within the context of the indoor environment. In doing so, it reviews and contributes to the methods and tools used to evaluate the indoor environment quality (IEQ) components of building performance. Its main aim is to review the literature on indoor human comfort, followed by a review of previously published papers related to human comfort. Finally, this paper provides possible approaches at the design level for healthy buildings.

Keywords: Sustainable building, indoor environment quality (IEQ), occupant's health, active system, sick building syndrome (SBS).

115 Interest of Pseudo-Noise Code Sequences of Different Lengths for Reducing Interference between Users of a CDMA Network

Authors: Nerguè Kassahan Kone, Souleymane Oumtanaga

Abstract:

The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Contrary to IS-95 or cdmaOne systems (spread-spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is conceived for voice communication and data transmission. The downlink is particularly important because of the asymmetrical demand for data, i.e., more downloading towards the mobiles than towards the base station. Moreover, UMTS uses orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor) in the downlink. This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of other users. In the current UMTS standard, two techniques to increase downlink performance have been proposed: transmit antenna diversity and space-time codes. These two techniques only combat fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce the interference between users and the impact of coloured noise and narrowband interference. In this context, where users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.

Keywords: DS-CDMA, multiple access interference, signal-to-interference-plus-noise ratio.

114 Relevant Stakeholders in Environmental Management Organization: The Case of Industries in Três Rios/RJ

Authors: Beatriz dos Anjos Furtado, Marina Barreiros Lamim, Camila Avozani Zago, Julianne Alvim Milward-de-Azevedo, Luís Cláudio Meirelles de Medeiros

Abstract:

The intense process of economic acceleration, the expansion of industrial activities and capitalism, combined with population growth, promote development but also bring environmental consequences to the dynamics of locations. It can be seen that society is seeking to break with old paradigms of capitalist society, seeking to reconcile growth with sustainable development, with a change of mentality among the stakeholders of the production process (shareholders, employees, suppliers, customers, governments, neighbors, citizen groups and the general public). In this context, this research aims to map the stakeholders interested in environmental management in industries located in the city of Três Rios/RJ. The city of Três Rios is located in the South-Central region of the state of Rio de Janeiro, Brazil. The methodology comprises descriptive and field research, both qualitative and quantitative in nature. It also involves multi-case studies in the study area; data collection occurred by means of semi-structured questionnaires and interviews with employees related to the environmental area of the industries located in Três Rios, registered with the Federation of Industries of the State of Rio de Janeiro (FIRJAN, 2013 edition) and active with the federal revenue service. Through this research it was observed, among other things, which stakeholders are involved in the environmental management process of the responding Três Rios industries, and which of them the industries respond to in their environmental management.

Keywords: Environmental management, environmental practices, industry, stakeholders.

113 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals of the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. The conventional distribution network, however, was not designed to receive generation other than the main power supply, and the tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which could be due to the poor distribution of single-phase PV systems in the network. In order to analyze the impact of connecting small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter, and good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, the PV systems are modeled via an equivalent circuit, and the primal-dual interior point method is used to obtain the optimal operating points for the systems.

Keywords: Photovoltaic generation, primal-dual interior point method, three-phase optimal power flow, unbalanced system.

112 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision [d]; If [condition]; Generality [general information]; Specificity [specific information]. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
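For reference, the quantification step rests on Dempster's rule of combination, which fuses two basic belief assignments m_1 and m_2 over a frame of discernment (this is the standard form of the rule; the paper's specific usage may differ):

```latex
m_{1,2}(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C)
\quad (A \neq \emptyset), \qquad
K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)
```

where K measures the conflict between the two bodies of evidence.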

Keywords: Knowledge discovery in database, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

111 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

There are various sources of energy available worldwide, and among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies such as Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, microbial enhanced oil recovery (MEOR) is a technique that improves oil recovery by the injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve otherwise unrecoverable oil held in the cap rock by high capillary forces. Bio-surfactant is a surface-active agent which can reduce the interfacial tension and the viscosity of oil, so that oil can be recovered to the surface as its mobility is increased. Research in this area has shown promising results, and the method is eco-friendly and cost-effective compared with other EOR techniques. In our research, on a laboratory scale, we produced bio-surfactant using the strain Pseudomonas putida (MTCC 2467) and injected it into a designed simple sand-packed column which resembles an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic material, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was carried out to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery can be achieved. A comparative study was also done by injecting Triton X-100, a chemical surfactant. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand-packed column.

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

110 A β-mannanase from Fusarium oxysporum SS-25 via Solid State Fermentation on Brewer’s Spent Grain: Medium Optimization by Statistical Tools, Kinetic Characterization and Its Applications

Authors: S. S. Rana, C. Janveja, S. K. Soni

Abstract:

This study is concerned with the optimization of fermentation parameters for the hyperproduction of mannanase from Fusarium oxysporum SS-25, employing a two-step statistical strategy, and with the kinetic characterization of the crude enzyme preparation. The Plackett-Burman design used to screen out the important factors in the culture medium revealed 20% (w/w) wheat bran, 2% (w/w) each of potato peels, soyabean meal and malt extract, 1% tryptone, 0.14% NH4SO4, 0.2% KH2PO4, 0.0002% ZnSO4, 0.0005% FeSO4, 0.01% MnSO4, 0.012% SDS, 0.03% NH4Cl and 0.1% NaNO3 in a brewer's spent grain-based medium with 50% moisture content, inoculated with 2.8×10^7 spores and incubated at 30 °C for 6 days, to be the main parameters influencing enzyme production. Of these factors, four variables, soyabean meal, FeSO4, MnSO4 and NaNO3, were chosen to study their interactive effects and optimum levels in a central composite design of response surface methodology, giving a final mannanase yield of 193 IU/gds. The kinetic characterization revealed the crude enzyme to be active over a broad temperature and pH range. The enzyme achieved a 26.6% reduction in kappa number, a 4.93% higher tear index and a 1% increase in brightness when used to treat wheat straw-based kraft pulp. Its hydrolytic potential was also demonstrated on both locust bean gum and guar gum.

Keywords: Brewer’s Spent Grain, Fusarium oxysporum, Mannanase, Response Surface Methodology.

109 Application of Voltage Stability Indices for Proper Placement of STATCOM under Load Increase Scenario

Authors: A. S. Telang, P. P. Bedekar

Abstract:

In today’s world, electrical energy has become an indispensable component of all aspects of modern human life. Reliability, security and stability are the key aspects of any power system, and failure to meet any of these three aspects results in a great impediment to modern life. Modern power systems are being subjected to heavily stressed conditions leading to voltage stability problems. If voltage stability problems are not mitigated properly through proper voltage stability assessment methods, cascading events may occur which may lead to voltage collapse or blackout events. Modern FACTS devices like the STATCOM are one of the measures to overcome blackout problems. As these devices are very costly, they must be installed properly at suitable locations, usually at the weakest bus. Line voltage stability indices such as FVSI, Lmn and LQP play an important role in identifying a weak bus. This paper presents an evaluation of these line stability indices for the assessment of reliable information about the closeness of the power system to voltage collapse. PSAT is a user-friendly MATLAB toolbox whose continuation power flow (CPF) feature has been extensively used for the placement of the STATCOM to assess stability. The novelty of the present work lies in changing the active and reactive load simultaneously at all the load buses under consideration. MATLAB code has been developed for this purpose and tested successfully on various standard IEEE test systems. The results for the standard IEEE 14-bus test system, specifically, are presented in this paper.
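For context, the three line indices are commonly defined in the literature as follows for a line from sending bus i to receiving bus j, with line impedance Z, reactance X, receiving-end reactive power Q_j, sending-end active power P_i, sending-end voltage V_i, line angle θ and voltage angle difference δ; values approaching 1 indicate proximity to voltage collapse (these are the standard published definitions, quoted here for reference rather than taken from this paper):

```latex
\mathrm{FVSI}_{ij} = \frac{4\,Z^{2}\,Q_j}{V_i^{2}\,X}, \qquad
L_{mn} = \frac{4\,X\,Q_j}{\left[V_i \sin(\theta - \delta)\right]^{2}}, \qquad
\mathrm{LQP} = 4\left(\frac{X}{V_i^{2}}\right)\left(\frac{X\,P_i^{2}}{V_i^{2}} + Q_j\right)
```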

Keywords: Voltage stability analysis, voltage collapse, PSAT, CPF, VSI, FVSI, Lmn, LQP.

108 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate has increased dramatically, and young adults are most vulnerable to the mental health effects of the pandemic. Lower-income families are diagnosed with depression at a higher rate than the general population but have less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy with an interdisciplinary approach, incorporating clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detected the nine depression symptoms with Natural Language Processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with facial emotional cues analyzed by a Convolutional Neural Network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection to raise awareness among adolescents and young adults and revealing complementary cues to assist clinical depression diagnosis.
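The symptom-lexicon matching step can be pictured with a toy sketch; the lexicon entries and matching rule below are invented placeholders, not the study's actual lexicon.

```python
# Toy sketch of lexicon-based symptom detection; the lexicon entries and
# the matching rule are illustrative placeholders, not the study's lexicon.
SYMPTOM_LEXICON = {
    "anhedonia": ["lost interest", "nothing is fun", "don't enjoy"],
    "sleep": ["can't sleep", "insomnia", "sleeping all day"],
    "fatigue": ["exhausted", "no energy", "drained"],
}

def detect_symptoms(text):
    """Return the set of depression symptoms whose cues appear in text."""
    text = text.lower()
    return {symptom for symptom, cues in SYMPTOM_LEXICON.items()
            if any(cue in text for cue in cues)}

print(detect_symptoms("I'm exhausted and can't sleep anymore"))
# -> {'fatigue', 'sleep'}
```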

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.

107 Detection of Transgenes in Cotton (Gossypium hirsutum L.) by Using Biotechnology/Molecular Biological Techniques

Authors: Ahmad Ali Shahid, Muhammad Shakil Shaukat, Kamran Shehzad Bajwa, Abdul Qayyum Rao, Tayyab Husnain

Abstract:

Agriculture is the backbone of the economy of Pakistan, and cotton is the major agricultural export and supreme source of raw fiber for our textile industry. To combat the severe problems of insects and weeds, a combination of three genes, namely Cry1Ac, Cry2A and EPSPS, was transferred into the locally cultivated cotton variety MNH-786 using Agrobacterium-mediated genetic transformation. The present study focused on the molecular screening of transgenic cotton plants at the T3 generation in order to confirm the integration and expression of all three genes (Cry1Ac, Cry2A and EPSP synthase) in the cotton genome. Initially, a glyphosate spray assay was used for screening of transgenic cotton plants containing the EPSP synthase gene at the T3 generation. Transgenic cotton plants which were healthy and showed no damage on leaves were selected after 7 days of spray. For molecular analysis of the transgenic cotton plants in the laboratory, their genomic DNA was isolated and subjected to amplification of the three genes. Seventeen out of twenty plants (Cry1Ac gene), ten out of twenty (Cry2A gene) and all twenty (EPSP synthase gene) produced positive amplification. On the basis of PCR amplification, ten transgenic plant samples were subjected to protein expression analysis through ELISA. The results showed that eight out of ten plants were actively expressing the three transgenes. Real-time PCR was also done to quantify the mRNA expression levels of the Cry1Ac and EPSP synthase genes. Finally, eight plants were confirmed for the presence and active expression of all three genes at the T3 generation.

Keywords: Agriculture, Cotton, Transformation, Cry Genes, ELISA and PCR.

106 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance-based has emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance-based methods have already been presented in the literature, often with inconclusive and even contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance-based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance-based methods, principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance-based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
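A minimal sketch of one of the compared pipelines, PCA ("eigenfaces") with nearest-neighbour matching for verification, is given below; it assumes preprocessed, flattened face images in hypothetical .npy files and is an illustration, not the paper's exact protocol.

```python
# Minimal appearance-based pipeline: PCA ("eigenfaces") + 1-NN matching
# for verification; file names and dimensions are hypothetical, and the
# paper additionally evaluates LDA and ICA under the same preprocessing.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

X_train = np.load("faces.npy")   # (n_samples, n_pixels) flattened faces
y_train = np.load("ids.npy")     # identity labels

pca = PCA(n_components=100, whiten=True).fit(X_train)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

def verify(probe, claimed_id):
    """Accept the claim if the nearest gallery face carries the claimed ID."""
    return clf.predict(pca.transform(probe.reshape(1, -1)))[0] == claimed_id
```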

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

105 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario's Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building-archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to model the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
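The outlier-screening step mentioned in finding (1) can be sketched with a standard interquartile-range rule on energy use intensity; the file and column names below are hypothetical and the thresholds illustrative.

```python
# Sketch of the outlier-screening step using an interquartile-range rule
# on energy use intensity (EUI); file/column names are hypothetical.
import pandas as pd

df = pd.read_csv("ontario_psei_disclosure.csv")
eui = df["energy_kwh"] / df["floor_area_m2"]     # kWh per m2 per year
q1, q3 = eui.quantile([0.25, 0.75])
iqr = q3 - q1
clean = df[(eui >= q1 - 1.5 * iqr) & (eui <= q3 + 1.5 * iqr)]
benchmark = (clean["energy_kwh"] / clean["floor_area_m2"]).median()
print(f"Residence benchmark: {benchmark:.1f} kWh/m2/yr")
```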

Keywords: Building archetypes, data analysis, energy benchmarks, GHG emissions.

104 Control of Vibrations in Flexible Smart Structures using Fast Output Sampling Feedback Technique

Authors: T.C. Manjunath, B. Bandyopadhyay

Abstract:

This paper features the modeling and design of a Fast Output Sampling (FOS) feedback control technique for the Active Vibration Control (AVC) of a smart flexible aluminium cantilever beam for a Single Input Single Output (SISO) case. Controllers are designed for the beam by bonding patches of piezoelectric layer as sensor/actuator to the master structure at different locations along the length of the beam, retaining the first two dominant vibratory modes. The entire structure is modeled in state space form using piezoelectric theory, Euler-Bernoulli beam theory, the Finite Element Method (FEM) and state space techniques, dividing the structure into 3, 4 or 5 finite elements and thus giving rise to three types of systems: system 1 (beam divided into 3 finite elements), system 2 (4 finite elements) and system 3 (5 finite elements). The effect of placing the sensor/actuator at various locations along the length of the beam is observed for all three types of systems, and conclusions are drawn for the best performance and for the smallest magnitude of control input required to control the vibrations of the beam. Simulations are performed in MATLAB. The open loop responses, closed loop responses and tip displacements with and without the controller are obtained, and the performance of the proposed smart system is evaluated for vibration control.

Keywords: Smart structure, Finite element method, State space model, Euler-Bernoulli theory, SISO model, Fast output sampling, Vibration control, LMI.

103 Specific Biomarker Level and Function Outcome Changes in Treatment of Patients with Frozen Shoulder Using Dextrose Prolotherapy Injection

Authors: Nuralam Sam, Irawan Yusuf, Irfan Idris, Endi Adnan

Abstract:

Frozen shoulder (FS) is an insidious, painful condition in which inflammation causes fibrosis of the glenohumeral joint capsule, leading to progressive stiffness and restriction of the active and passive range of motion (ROM) of the shoulder. Studies of FS are still limited. This single-blinded randomized controlled trial involved participants with FS, divided into two groups: the prolotherapy group (study group) and the normal saline (NS) group (control group). Both groups were given injections at weeks 0, 2, 4, and 6. Matrix Metalloproteinase-1 (MMP-1) and Tissue Inhibitor of Metalloproteinase-1 (TIMP-1) were measured at week 6 and at week 12 after the last injection. The Disabilities of the Arm, Shoulder, and Hand (DASH) score and ROM were measured at weeks 0, 2, 4, and 6, before and after injection, and at week 12. Comparative analysis was performed using the paired t-test for repeated measures, and correlations were assessed using ANOVA. The results showed a significant decrease in the DASH score in prolotherapy injection patients at each measurement week (p < 0.05), while each direction of shoulder motion showed a significant difference in weekly mean ROM from week 0 to week 6 (p < 0.05). Dextrose prolotherapy injection significantly improved the functional outcome of the shoulder joint and ROM, but did not show significant results for the specific biomarkers of tissue repair, MMP-1 and TIMP-1. This study suggests injection prolotherapy as an alternative for FS patients; it has minimal adverse effects and is efficient in time and cost.

Keywords: Frozen Shoulder, ROM, DASH Score, prolotherapy, MMP-1, TIMP-1.

102 Safe, Effective, and Cost-Efficient Air Cleaning for Populated Rooms and Entire Buildings Based on the Disinfecting Power of Vaporized Hypochlorous Acid

Authors: D. Boecker, R. Breves, F. Herth, Z. Zhang, C. Bulitta

Abstract:

Pathogen-carrying aerosol particles are recognized as important infection carriers, as in the current coronavirus pandemic. This infection route is often underestimated, yet it is the route that has been least systematically countered to date. Transmission indoors is of particular concern, but current indoor safety measures (e.g., distancing, masks, filters) provide only limited protection. Inhalation of aerosols containing hypochlorous acid (HOCl) may become an alternative route to attack incubating microbes in situ and so potentially reduce the symptoms of already infected individuals. We investigated a facility-wide air-disinfection concept utilizing the potential of vaporized HOCl as a disinfecting agent for populated indoor atmospheres. Aerosolized bacterial microbes were used as surrogates for viral contamination, particularly the enveloped coronavirus. For the room-air purification tests, we aerosolized bacterial suspensions into lab chambers preloaded with vaporized HOCl solutions. The concentration of free active chlorine in the test chamber atmosphere was determined with a special gas sensor system (Draeger AG, Lübeck, Germany), controlling the amount of vaporized HOCl via an aerosolis® device (oji Europe GmbH, Nauen, Germany). We could confirm the disinfecting power of HOCl in suspensions and determined the high efficacy of vaporized HOCl in disinfecting the atmospheres of populated indoor places at safe and non-irritant levels.

Keywords: Hypochlorous acid, HOCl, indoor air cleaning, infection control, microbial air burden, protective atmosphere.

101 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based On Li-ion Battery and Solar Energy Supply

Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan

Abstract:

Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is to save as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by using routing protocols with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before batteries are changed.

In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented. Finally, a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
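The charger's constant-current/constant-voltage logic can be sketched as a simple control step; the setpoints below are typical Li-ion values chosen for illustration, not the paper's exact charger parameters.

```python
# Sketch of CC/CV charging logic; setpoints are typical Li-ion values
# (illustrative, roughly 0.5 C for a 1200 mAh cell), not the paper's
# exact charger parameters.
V_MAX, I_CC, I_TERM = 4.2, 0.6, 0.05   # volts, amps, amps

def cc_cv_step(v_batt, i_batt):
    """Return the charger's target current for one control step."""
    if v_batt < V_MAX:
        return I_CC                 # constant-current phase
    if i_batt > I_TERM:
        return min(i_batt, I_CC)    # constant-voltage phase: hold V_MAX,
                                    # charging current tapers naturally
    return 0.0                      # taper complete: terminate charge
```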

Keywords: ZigBee, Li-ion battery, solar panel, CC2530.

100 Currency Boards in Crisis: Experience of Baltic Countries

Authors: Gordana Kordić, Petra Palić

Abstract:

The European countries that during the past two decades based their exchange rate regimes on a currency board arrangement (CBA) are usually analysed from the perspective of the stabilisation effects of this corner-solution choice. There is an open discussion on the positive and negative aspects of a strict exchange rate regime choice, although it should be seen as part of the transition process towards monetary union membership. The focus of the paper is on the Baltic countries which, after two decades of a rigid exchange rate arrangement and strongly influenced by the global crisis, are finishing their path towards the euro zone. Besides its stabilising capacity, the CBA is a highly vulnerable regime with limited development potential. The rigidity of the exchange rate (and monetary) system, despite the credibility it ensures, does not leave enough (or any) space for adjustment and/or active crisis management. Still, the Baltics are in a process of recovery, with fiscal consolidation measures combined with (painful and politically unpopular) measures of internal devaluation. Today, two of them (Estonia and Latvia) are members of the euro zone, fulfilling their ultimate transition targets, but de facto exchanging one fixed regime for another. The paper analyses the challenges for the CBA in an unstable environment, since fixed regimes rely on imported stability and are sensitive to external shocks. With limited monetary instruments, these countries were oriented towards fiscal policies and used a combination of internal devaluation and tax policy measures. Despite their rather quick recovery, our second goal is to analyse the long-term influence that these measures had on the national economies.

Keywords: Currency Board Arrangement, internal devaluation, exchange rate regime, Great recession.

99 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni

Abstract:

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks than today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording, such as electroencephalography (EEG) and eye-tracking devices. The current vigilance level and the attention focus of the controller are measured during the ATCO's active work in front of the human machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the two VAC sub-components.

Keywords: Automation, human factors, air traffic controller, MINIMA, OOTL, Out-Of-The-Loop, EEG, electroencephalography, HMI, human machine interface.

98 A Structured Mechanism for Identifying Political Influencers on Social Media Platforms: Top 10 Saudi Political Twitter Users

Authors: Ahmad Alsolami, Darren Mundy, Manuel Hernandez-Perez

Abstract:

Social media networks, such as Twitter, offer the perfect opportunity to either positively or negatively affect political attitudes among large audiences. The existence of influential users who have developed a reputation for their knowledge and experience of specific topics is a major factor contributing to this impact. Therefore, knowledge of the mechanisms to identify influential users on social media is vital for understanding their effect on their audience. The concept of the influential user is related to that of opinion leaders: ideas first flow from mass media to opinion leaders and then to the rest of the population. Hence, the objective of this research was to provide reliable and accurate structured mechanisms to identify influential users, which could be applied to different platforms, places, and subjects. Twitter was selected as the platform of interest, and Saudi Arabia as the context for the investigation. These were selected because Saudi Arabia has a large number of Twitter users, some of whom are considerably active in setting agendas and disseminating ideas. The study considered the scientific methods that have previously been used to identify public opinion leaders, utilizing metrics software on Twitter. The key findings propose multiple novel metrics to compare Twitter influencers, including the number of followers, social authority and the use of political hashtags, along with four secondary filtering measures. Thus, using ratio and percentage calculations to classify the most influential users, Twitter accounts were filtered, analyzed and ranked. The structured approach is used as a mechanism to explore the top ten influencers on Twitter in the political domain in Saudi Arabia.
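A toy version of the ratio-and-percentage ranking step might look like the sketch below; the field names, weights, and scoring rule are illustrative placeholders, not the study's exact metrics.

```python
# Toy sketch of ratio-based influencer ranking; the fields and the
# scoring rule are illustrative placeholders, not the study's metrics.
accounts = [
    {"name": "user_a", "followers": 2_000_000, "following": 350,
     "political_hashtag_share": 0.42},
    {"name": "user_b", "followers": 900_000, "following": 12_000,
     "political_hashtag_share": 0.18},
]

def influence_score(acc):
    """Combine a follower/following ratio with political-hashtag usage."""
    follower_ratio = acc["followers"] / max(acc["following"], 1)
    return follower_ratio * acc["political_hashtag_share"]

top10 = sorted(accounts, key=influence_score, reverse=True)[:10]
print([a["name"] for a in top10])
```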

Keywords: Twitter, influencers, structured mechanism, Saudi Arabia.
