Search results for: intermediate input source
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7085

5225 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin film transistors (NWs JL-TFT) with nickel silicide contacts. For the nickel silicide film, a two-step annealing scheme is designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NWs JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work compares the electrical characteristics of NWs JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used in high-performance devices because the source/drain sheet resistance becomes an issue as devices scale down. Therefore, the self-aligned silicide (salicide) technique is applied to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has several advantages, such as a lower thermal budget (which makes integration with high-k/metal-gate stacks easier than in conventional MOSFETs (Metal Oxide Semiconductor Field-Effect Transistors)), a longer effective channel length than conventional MOSFETs, and the avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed so that the channel region can be fully depleted in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin film transistors with nickel silicide contacts. The low-temperature formation of the nickel silicide layer is investigated by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing scheme is used to form the ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first step promoted Ni diffusion through a thin interfacial amorphous layer; the unreacted metal was then lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The resulting ultra-thin poly-Si nanowire junctionless thin film transistor (NWs JL-TFT) with nickel silicide contacts shows a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to obtain a lower sheet resistance and a firmly merged silicide phase. In short, the NWs JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 231
5224 Experimental Analyses of Thermoelectric Generator Behavior Using Two Types of Thermoelectric Modules for Marine Application

Authors: A. Nour Eddine, D. Chalet, L. Aixala, P. Chessé, X. Faure, N. Hatat

Abstract:

Thermal power technology such as the TEG (Thermo-Electric Generator) arouses significant attention worldwide for waste heat recovery. Despite the potential benefits of marine application due to the permanent heat sink provided by sea water, no significant studies on this application were to be found. In this study, a test rig has been designed and built to test the performance of the TEG at engine operating points. The TEG device is built from commercially available materials for the sake of possible economical application. Two types of commercial TEM (thermo-electric module) have been studied separately on the test rig. The engine data were extracted from a commercial Diesel engine, since it shares the same operating principles, in terms of engine efficiency and exhaust, with a marine Diesel engine. An open-circuit water cooling system is used to replicate the sea water cold source. The characterization tests showed that the silicon-germanium alloy TEM proved remarkably reliable at all engine operating points, with no significant deterioration of performance even under severe variation of the hot-source conditions. The performance of the bismuth-telluride alloy TEM was 100% better than that of the first type, but it showed a deterioration in power generation when the air temperature exceeded 300 °C. The temperature distribution on the heat-exchange surfaces revealed no useful combination of these two types of TEM with this tube length, since the surface temperature difference between both ends is no more than 10 °C. This study exposed the perspective of using TEG technology for marine engine exhaust heat recovery. Although the results suggested insufficient power generation from the low-cost commercial TEMs used, they provide valuable information about TEG device optimization, including the design of the heat exchanger and the choice of thermoelectric materials.
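
As a rough illustration of how such characterization data translate into an electrical output, the sketch below applies the standard matched-load approximation P_max = (S·ΔT)²/(4·R_int) for a single module; the Seebeck coefficient, internal resistance and temperature differences used here are placeholder values, not measurements from this test rig.

```python
# Rough matched-load power estimate for a single thermoelectric module (TEM).
# All numerical values are illustrative placeholders, not data from the test rig.

def tem_power_matched_load(seebeck_v_per_k: float,
                           internal_resistance_ohm: float,
                           delta_t_k: float) -> float:
    """Maximum electrical power of one TEM when the load resistance equals
    the module's internal resistance: P = (S*dT)^2 / (4*R_int)."""
    open_circuit_voltage = seebeck_v_per_k * delta_t_k
    return open_circuit_voltage ** 2 / (4.0 * internal_resistance_ohm)

if __name__ == "__main__":
    # Placeholder module parameters (order of magnitude only).
    S = 0.05        # effective Seebeck coefficient of the module [V/K]
    R_int = 1.5     # module internal resistance [ohm]
    for dT in (50, 150, 250):   # hot-side minus cold-side temperature [K]
        print(f"dT = {dT:3d} K -> P_max ~ {tem_power_matched_load(S, R_int, dT):.2f} W")
```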

Keywords: internal combustion engine application, Seebeck, thermo-electricity, waste heat recovery

Procedia PDF Downloads 240
5223 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th latency percentile from 30 to 4 seconds.
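
The accuracy-annotation idea can be illustrated with a toy group-by count over a uniform row sample: each sampled group count is scaled by the inverse sampling rate, and the estimate is flagged as accurate only when enough sampled rows support it. The threshold rule below is a simplified stand-in for the heuristic described in the paper, not PowerDrill's production logic.

```python
import random
from collections import Counter

# Toy illustration of approximate group-by counts over a row sample, with a
# simple per-group accuracy flag. The min_support threshold is a simplified
# stand-in for the heuristic described in the paper, not PowerDrill's logic.

def approximate_group_counts(rows, key, sampling_rate=0.01, min_support=100):
    sample = [r for r in rows if random.random() < sampling_rate]
    support = Counter(key(r) for r in sample)
    estimates = {}
    for group, n in support.items():
        estimates[group] = {
            "estimated_count": round(n / sampling_rate),
            "accurate": n >= min_support,   # enough sampled rows behind the estimate
        }
    return estimates

if __name__ == "__main__":
    random.seed(0)
    countries = ["US"] * 60 + ["DE"] * 30 + ["JP"] * 9 + ["NZ"]   # skewed distribution
    rows = [{"country": random.choice(countries)} for _ in range(500_000)]
    for group, est in approximate_group_counts(rows, key=lambda r: r["country"]).items():
        print(group, est)
```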

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 253
5222 EMG Based Orthosis for Upper Limb Rehabilitation in Hemiparesis Patients

Authors: Nancy N. Sharmila, Aparna Mishra

Abstract:

Hemiparesis affects almost 80% of stroke patients each year. It is marked by paralysis or weakness of one half of the body. Our model provides both assistance and physical therapy for hemiparesis patients for a swift recovery. In order to accomplish this goal, a force is provided that pulls the forearm up (as in flexing the arm) and pushes it down (as in extending the arm), which also assists the user during ADL (Activities of Daily Living). The model consists of a mechanical component placed around the patient’s biceps and an EMG control circuit to assist patients in daily activities, which makes it affordable and easy to use. In order to enhance the neuromuscular system’s effectiveness in synchronizing movement, the proprioceptive neuromuscular facilitation (PNF) concept is used. The EMG signals are acquired from the unaffected arm as an input to drive the orthosis. This way, the patient is encouraged to use the orthosis for regular exercise.
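
A minimal sketch of the control idea is given below, under the assumption that the EMG from the unaffected arm is rectified, smoothed into an envelope, and compared against flexion/extension thresholds; the sampling rate, window length and threshold values are illustrative assumptions, not the settings of the prototype.

```python
import numpy as np

# Minimal sketch: rectify and smooth an EMG signal from the unaffected arm,
# then map the envelope to a flex/extend/hold command for the orthosis.
# Sampling rate, window length and thresholds are illustrative assumptions.

FS = 1000                # sampling rate [Hz] (assumed)
WINDOW_S = 0.1           # moving-average window [s]
FLEX_THRESHOLD = 0.6     # envelope level commanding forearm flexion
EXTEND_THRESHOLD = 0.2   # below this level the forearm is lowered

def emg_envelope(raw_emg: np.ndarray) -> np.ndarray:
    rectified = np.abs(raw_emg - np.mean(raw_emg))      # remove offset, rectify
    window = int(FS * WINDOW_S)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")  # moving-average smoothing

def orthosis_command(envelope_sample: float) -> str:
    if envelope_sample > FLEX_THRESHOLD:
        return "FLEX"      # pull the forearm up
    if envelope_sample < EXTEND_THRESHOLD:
        return "EXTEND"    # let the forearm down
    return "HOLD"

if __name__ == "__main__":
    t = np.arange(0, 2, 1 / FS)
    burst = (t > 0.5) & (t < 1.2)                       # simulated muscle activation
    emg = 0.05 * np.random.randn(t.size) + burst * np.random.randn(t.size)
    env = emg_envelope(emg)
    print(orthosis_command(env[int(0.8 * FS)]), orthosis_command(env[int(1.8 * FS)]))
```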

Keywords: EMG, hemiparesis, orthosis, rehabilitation

Procedia PDF Downloads 438
5221 Language Activation Theory: Unlocking Bilingual Language Processing

Authors: Leorisyl D. Siarot

Abstract:

It is conventional to see and hear Filipinos, in general, speak two or more languages. This phenomenon brings us to a closer look at how our minds process the input and produce an output in a specific chosen language. This study aimed to generate a theoretical model explaining the interaction of the first and the second languages in the human mind. After a careful analysis of the gathered data, a theoretical prototype called the Language Activation Model was generated. For every language string, there are three specialized banks: lexico-semantics, morphono-syntax, and pragmatics. These banks are interrelated with the banks of other language strings. As the bilingual learns more languages, a new string is replicated and filled with the information of the newly learned language. The principles of the interaction between the first and second languages are drawn; these are expressed in laws, namely: the law of dominance, the law of availability, the law of usuality and the law of preference. Furthermore, difficulties encountered in the learning of second languages were also determined.

Keywords: bilingualism, psycholinguistics, second language learning, languages

Procedia PDF Downloads 507
5220 Development of a Plant-Based Dietary Supplement to Address Critical Micronutrient Needs of Women of Child-Bearing Age in Europe

Authors: Sara D. Garduno-Diaz, Ramona Milcheva, Chanyu Xu

Abstract:

Women’s reproductive stages (pre-pregnancy, pregnancy, and lactation) represent a time of higher micronutrient needs. With a healthy food selection as the first choice to cover these increased needs, tandem micronutrient supplementation is often required. Because pregnancy and lactation should be treated with care, all supplements consumed should be made of quality ingredients and manufactured through controlled processes. This work describes the process followed for the development of plant-based multiple-micronutrient supplements aimed at addressing the growing demand for natural ingredients of non-animal origin. A list of key nutrients for inclusion was prioritized, followed by the identification and selection of qualified raw ingredient providers. Nutrient absorption into the food matrix was carried out through natural processes. The outcome is a new line of products meeting the set criteria of being gluten- and lactose-free, suitable for vegans/vegetarians, and free of artificial preservatives. In addition, each product provides the consumer with 10 vitamins, 6 inorganic nutrients, 1 source of essential fatty acids, and 1 source of phytonutrients (maca, moringa, or chlorella). Each raw material, as well as the final product, was submitted to three-fold microbiological control (in-house and external). The final micronutrient mix was then tested for human-factor contamination, pesticides, total aerobic microbial count, total yeast count, and total mold count. The product was created with the aim of meeting product standards for the European Union, as well as specific requirements of the German market in the food and pharma fields. The results presented here reach the point of introduction of the newly developed product to the market, with acceptability and effectiveness results to be published at a later date.

Keywords: fertility, lactation, organic, pregnancy, vegetarian

Procedia PDF Downloads 141
5219 Production and Purification of Pectinase by Aspergillus Niger

Authors: M. Umar Dahot, G. S. Mangrio

Abstract:

In this study, agro-industrial waste, a low-cost substrate, was used as a carbon source. Along with this, various sugars and molasses at 2.5% and 5% were investigated as substrates/carbon sources for the growth of A. niger and pectinase production. Different nitrogen sources were also used. An overview of the results obtained shows that 5% sucrose, 5% molasses and 0.4% (NH4)2SO4 were the best carbon and nitrogen sources for the production of pectinase by A. niger. The maximum production of pectinase (26.87 units/ml) was observed at pH 6.0 after 72 h of incubation. The optimum temperature for pectinase production was 35ºC, at which a maximum production of 28.25 units/ml was obtained. The pectinase was purified by ammonium sulphate precipitation, and the dialyzed sample was finally applied to gel filtration chromatography (Sephadex G-100) and ion exchange chromatography (DEAE A-50). The enzyme was purified 2.5-fold by gel chromatography on Sephadex G-100, and four fractions were obtained; fractions 1, 2 and 4 showed a single band, while fraction 3 showed multiple bands on SDS-PAGE electrophoresis. Fraction 3 was pooled, dialyzed and separated on Sephadex A-50, and the two resulting fractions, 3a and 3b, each showed a single band. The molecular weights of the purified fractions were in the range of 33,000 ± 2,000 and 38,000 ± 2,000 Daltons. The purified enzyme was most active with pure pectin, while pectin, lemon pectin and orange peel gave lower activity compared to the control. The optimum pH and temperature for pectinase activity were found to be pH 5.0–6.0 and 40–50°C, respectively. The enzyme was stable over the pH range 3.0–8.0. The thermostability was determined, and it was observed that the pectinase activity is heat stable, retaining more than 40% of its activity when incubated at 90°C for 10 minutes. The pectinase activity of F3a and F3b increased with different metal ions. The pectinase activity was stimulated by 10–30% in the presence of CaCl2. ZnSO4, MnSO4 and MgSO4 gave higher activity in fractions F3a and F3b, which indicates that the pectinase belongs to the metallo-enzymes. It is concluded that A. niger is capable of producing a pH-stable and thermostable pectinase, which can be used for industrial purposes.

Keywords: pectinase, A. niger, production, purification, characterization

Procedia PDF Downloads 406
5218 Experimental Study of Boost Converter Based PV Energy System

Authors: T. Abdelkrim, K. Ben Seddik, B. Bezza, K. Benamrane, Aeh. Benkhelifa

Abstract:

This paper proposes an implementation of a boost converter for a resistive load using photovoltaic energy as the source. The model of the photovoltaic cell and the operating principle of the boost converter are presented. A PIC microcontroller is used in the closed-loop control to generate the pulses controlling the converter circuit. To evaluate the performance of the boost converter, the output voltage of the PV panel is varied by shading one and two cells.
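
For reference, the sketch below combines the ideal continuous-conduction relation Vout = Vin/(1 − D) with a simple incremental duty-cycle correction of the kind a microcontroller loop might apply; the PV voltages, setpoint and step size are illustrative assumptions, not values from the reported experiment.

```python
# Ideal boost-converter relation plus a simple closed-loop duty-cycle update,
# of the kind a PIC firmware loop might run. Voltages, setpoint and step size
# are illustrative assumptions, not values from the experiment.

def boost_output(v_in: float, duty: float) -> float:
    """Ideal continuous-conduction output voltage: Vout = Vin / (1 - D)."""
    return v_in / (1.0 - duty)

def update_duty(duty: float, v_out: float, v_ref: float, step: float = 0.01) -> float:
    """Nudge the duty cycle toward the reference output voltage."""
    duty += step if v_out < v_ref else -step
    return min(max(duty, 0.0), 0.85)        # clamp to a safe range

if __name__ == "__main__":
    v_ref = 48.0                            # desired output voltage [V]
    duty = 0.5
    # PV panel voltage dropping as one, then two, cells are shaded (assumed values).
    for v_pv in (34.0, 30.0, 26.0):
        for _ in range(200):                # let the control loop settle
            duty = update_duty(duty, boost_output(v_pv, duty), v_ref)
        print(f"Vin = {v_pv:.0f} V -> duty ~ {duty:.2f}, Vout ~ {boost_output(v_pv, duty):.1f} V")
```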

Keywords: boost converter, microcontroller, photovoltaic power generation, shading cells

Procedia PDF Downloads 868
5217 Clinical Profile and Outcome of Type I Diabetes Mellitus at a Tertiary Care-Centre in Eastern Nepal

Authors: Gauri Shankar Shah

Abstract:

Objectives: Type I diabetes mellitus in children is frequently a missed diagnosis, and children present in the emergency department with diabetic ketoacidosis, which carries significant morbidity and mortality. The present study was done to find out the clinical presentation and outcome at a tertiary-care centre. Methods: This was a retrospective analysis of data on Type I diabetes mellitus cases reporting to our centre during the last one year (2012-2013). Results: There were 12 patients (8 males), and the age group was 4-14 years (mean ± 3.7). The presenting symptoms were fever, vomiting, altered sensorium and fast breathing in 8 (66.6%), 6 (50%), 4 (33.3%), and 4 (33.3%) cases, respectively. The classical triad of polyuria, polydipsia, and polyphagia was present in only two patients (33.2%). Seizures and epigastric pain were found in two cases each (33.2%). Four cases (33.3%) presented with diabetic ketoacidosis due to discontinuation of insulin doses, while 2 had hyperglycemia alone. The hemogram revealed a mean hemoglobin of 12.1 ± 1.6 g/dL and a total leukocyte count of 22,883.3 ± 10,345.9 per mm3, with a polymorph percentage of 73.1 ± 9.0%. The mean blood sugar at presentation was 740 ± 277 mg/dL (544–1240). HbA1c ranged between 7.1 and 8.8%, with a mean of 8.1 ± 0.6%. The mean sodium, potassium, blood pH, pCO2, pO2 and bicarbonate were 140.8 ± 6.9 mEq/L, 4.4 ± 1.8 mEq/L, 7.0 ± 0.2, 20.2 ± 10.8 mmHg, 112.6 ± 46.5 mmHg and 9.2 ± 8.8 mEq/L, respectively. All the patients were managed in the pediatric intensive care unit as per our protocol, recovered, and were discharged on intermediate-acting insulin given twice daily. Conclusions: This shows that these patients have uncontrolled hyperglycemia and often present in the emergency department with ketoacidosis and a deranged biochemical profile. Regular administration of insulin, frequent monitoring of blood sugar and health education are required to achieve better metabolic control and a good quality of life.

Keywords: type I diabetes mellitus, hyperglycemia, outcome, glycemic control

Procedia PDF Downloads 252
5216 Lexical-Semantic Processing by Chinese as a Second Language Learners

Authors: Yi-Hsiu Lai

Abstract:

The present study aimed to elucidate lexical-semantic processing in learners of Chinese as a second language (CSL). Twenty L1 speakers of Chinese and twenty CSL learners in Taiwan participated in a picture naming task and a category fluency task. Based on their Chinese proficiency levels, the CSL learners were further divided into two sub-groups: ten CSL learners at the elementary Chinese proficiency level and ten at the intermediate Chinese proficiency level. The instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. The object pictures were divided into two categories: living objects and non-living objects. The action pictures were composed of two categories: action verbs and process verbs. As in the naming task, the category fluency task consisted of two semantic categories – objects (i.e., living and non-living objects) and actions (i.e., action and process verbs). Participants were asked to report as many items within a category as possible in one minute. Oral productions were tape-recorded and transcribed for further analysis. Both error types and error frequencies were calculated. Statistical analysis was further conducted to examine the error types and frequencies made by the CSL learners. Additionally, category effects, pictorial effects and L2 proficiency are discussed. The findings of the present study help characterize the lexical-semantic processing of Chinese naming in CSL learners of different Chinese proficiency levels and contribute to Chinese vocabulary teaching and learning in the future.

Keywords: lexical-semantic processing, Mandarin Chinese, naming, category effects

Procedia PDF Downloads 456
5215 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. AFSM is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as the axial force, the rotation speed and the friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way of studying the influence of the process parameters and of finally identifying a relevant process window. The deposition of material through the AFSM process takes place in several phases. In chronological order, these phases are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, during which pure friction raises the temperature of the system composed of the tool, the filler material, and the substrate. Analytical modeling of friction-based heat generation considers the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to question the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
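
As a first-order illustration of the dwell-phase heat input, the sketch below uses the classical sliding-friction estimate for a flat circular contact, Q = (2/3)·μ·F·ω·R, together with a friction coefficient that decays as temperature rises; the friction law and all numerical values are assumptions for illustration, not the calibrated model of this work.

```python
import math

# First-order estimate of friction heat during the dwell phase for a flat
# circular tool/substrate contact: Q = (2/3) * mu * F * omega * R.
# The temperature-dependent friction law and all values are illustrative
# assumptions, not the calibrated model of this study.

def friction_heat_w(mu: float, axial_force_n: float, rpm: float, radius_m: float) -> float:
    omega = 2.0 * math.pi * rpm / 60.0          # spindle speed [rad/s]
    return (2.0 / 3.0) * mu * axial_force_n * omega * radius_m

def friction_coefficient(temp_c: float, mu_cold: float = 0.4, mu_hot: float = 0.2) -> float:
    """Assumed linear drop of the friction coefficient between 20 °C and 400 °C
    (stands in for self-lubrication / roughness smoothing with temperature)."""
    frac = min(max((temp_c - 20.0) / (400.0 - 20.0), 0.0), 1.0)
    return mu_cold + frac * (mu_hot - mu_cold)

if __name__ == "__main__":
    force, rpm, radius = 3000.0, 1200.0, 0.008   # N, rev/min, m (placeholders)
    for temp in (20.0, 200.0, 400.0):
        q = friction_heat_w(friction_coefficient(temp), force, rpm, radius)
        print(f"T = {temp:5.0f} °C -> heat input ~ {q:7.1f} W")
```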

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 141
5214 Aseismic Stiffening of Architectural Buildings as Preventive Restoration Using Unconventional Materials

Authors: Jefto Terzovic, Ana Kontic, Isidora Ilic

Abstract:

In the proposed design concept, laminated glass and laminated plexiglass, as “unconventional materials”, are considered as a filling in a steel frame, onto which they overlap via an intermediate rubber layer, thereby forming a composite assembly. In this way, vertical stiffening elements are formed, capable of receiving the seismic force and integrated into the structural system of the building. The applicability of such a system was verified by experiments in laboratory conditions, where experimental models based on laminated glass and laminated plexiglass were exposed to cyclic loads simulating the seismic force. In this way, the load capacity of the composite assemblies was tested for the effects of dynamic load parallel to the assembly plane. Thus, the stress intensity to which the composite systems might be exposed was determined, as well as the range of structural stiffening with respect to the observed deformation, along with the advantages of one type of filling compared to the other. Using specialized software based on the finite element method, a computer model of the structure was created and processed in the case study; the same computer model was used for analyzing the problem in the first phase of the design process. The stiffening system based on the composite assemblies tested in the laboratory is implemented in the computer model. The results of the modal analysis and the seismic calculation from the computer model with the stiffeners applied showed the efficacy of such a solution, thus rounding off the design procedure for aseismic stiffening using unconventional materials.

Keywords: laminated glass, laminated plexiglass, aseismic stiffening, experiment, laboratory testing, computer model, finite element method

Procedia PDF Downloads 75
5213 The Routes of Human Suffering: How Point-Source and Destination-Source Mapping Can Help Victim Services Providers and Law Enforcement Agencies Effectively Combat Human Trafficking

Authors: Benjamin Thomas Greer, Grace Cotulla, Mandy Johnson

Abstract:

Human trafficking is one of the fastest growing international crimes and human rights violations in the world. The United States Department of State (State Department) estimates that some 800,000 to 900,000 people are trafficked across sovereign borders annually, with approximately 14,000 to 17,500 of these people coming into the United States. Today’s slavery is conducted by unscrupulous individuals who are often connected to organized criminal enterprises and transnational gangs, extracting huge monetary sums. According to the International Labour Organization (ILO), human traffickers collect approximately $32 billion worldwide annually. Surpassed only by narcotics dealing, trafficking of humans is tied with illegal arms sales as the second largest criminal industry in the world and is the fastest growing one in the 21st century. Perpetrators of this heinous crime abound. They are not limited to single or “sole practitioners” of human trafficking but often include Transnational Criminal Organizations (TCOs), domestic street gangs, labor contractors, and otherwise seemingly ordinary citizens. Monetary gain is being elevated over territorial disputes, and street gangs increasingly operate in collaboration with TCOs to further disguise their criminal activity and to utilize their vast networks in an attempt to avoid detection. Traffickers rely on a network of clandestine routes to sell their commodities with impunity. As law enforcement agencies seek to retard the expansion of transnational criminal organizations into human trafficking, it is imperative that they develop reliable maps of known exploitation routes. In a recent report to the Mexican Congress, the Procuraduría General de la República (PGR) disclosed that from 2008 to 2010 it had identified at least 47 unique criminal network routes used to traffic victims, and that Mexico’s domestic victims are estimated at between 800,000 adults and 20,000 children annually. Designing a reliable mapping system is a crucial step toward an effective law enforcement response and a successful victim support system. Creating such a mapping analytic is exceedingly difficult. Traffickers constantly change the way they traffic and exploit their victims. They swiftly adapt to local environmental factors and react remarkably well to market demands, exploiting limitations in the prevailing laws. This article will highlight how human trafficking has become one of the fastest growing and most high-profile human rights violations in the world today; compile current efforts to map and illustrate trafficking routes; and demonstrate how proprietary point-source and destination-source mapping analysis can help local law enforcement, governmental agencies and victim services providers respond effectively to the type and nature of trafficking in their specific geographical locale. Trafficking transcends state and international borders. It demands effective and consistent cooperation between local, state, and federal authorities. Each region of the world has different impact factors, which create distinct challenges for law enforcement and victim services. Our mapping system lays the groundwork for a targeted anti-trafficking response.

Keywords: human trafficking, mapping, routes, law enforcement intelligence

Procedia PDF Downloads 373
5212 Design of Reconfigurable Fixed-Point LMS Adaptive FIR Filter

Authors: S. Padmapriya, V. Lakshmi Prabha

Abstract:

In this paper, an efficient reconfigurable fixed-point Least Mean Square (LMS) adaptive FIR filter is proposed. The proposed architecture has two modes of operation: an area-efficient design and a power-optimized design. Pipelining of the adder blocks and of the partial product generator is used to achieve low area, and reversible logic is used to obtain a low-power design. Depending upon the input samples and filter coefficients, one of the techniques is chosen. Least-mean-square adaptation is performed to update the weights. The architecture is coded in Verilog and synthesized in Cadence Encounter 0.18 μm technology. The synthesis results show that the area reduction of the proposed design, compared with the conventional technique, is about 1.2%.
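
For reference, the LMS adaptation the hardware implements can be summarized by the update w(n+1) = w(n) + μ·e(n)·x(n); the floating-point sketch below shows that behaviour on a system-identification example (the fixed-point quantization, pipelining and reversible-logic aspects of the architecture are not modelled here).

```python
import numpy as np

# Floating-point sketch of the LMS adaptive FIR filter: w <- w + mu * e * x.
# The fixed-point quantization, pipelining and reversible-logic details of the
# proposed hardware are not modelled here.

def lms_filter(x, d, num_taps=8, mu=0.02):
    w = np.zeros(num_taps)
    y = np.zeros_like(d)
    e = np.zeros_like(d)
    buf = np.zeros(num_taps)                 # most recent input samples
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y[n] = w @ buf                       # filter output
        e[n] = d[n] - y[n]                   # error against the desired signal
        w = w + mu * e[n] * buf              # LMS weight update
    return w, y, e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unknown_system = np.array([0.6, -0.3, 0.1])          # plant to be identified
    x = rng.standard_normal(5000)
    d = np.convolve(x, unknown_system, mode="full")[:len(x)]
    w, _, e = lms_filter(x, d, num_taps=8, mu=0.02)
    print("estimated taps:", np.round(w[:3], 3), " final |error|:", round(abs(e[-1]), 4))
```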

Keywords: adaptive filter, carry select adder, least mean square algorithm, reversible logic

Procedia PDF Downloads 322
5211 Scenario Analysis to Assess the Competitiveness of Hydrogen in Securing the Italian Energy System

Authors: Gianvito Colucci, Valeria Di Cosmo, Matteo Nicoli, Orsola Maria Robasto, Laura Savoldi

Abstract:

The deployment of the hydrogen value chain is likely to be boosted in the near term by the energy security measures planned by European countries to face the recent energy crisis. In this context, some countries are recognized to have a crucial role in the geopolitics of hydrogen as importers, consumers and exporters. According to the European Hydrogen Backbone Initiative, Italy would be part of one of the 5 corridors that will shape the European hydrogen market. However, the set targets are very ambitious and require large investments to rapidly develop effective hydrogen policies: in this regard, scenario analysis is becoming increasingly important to support energy planning, and energy system optimization models appear to be suitable tools to carry out that kind of analysis quantitatively. This work aims to assess the competitiveness of hydrogen in contributing to Italian energy security in the coming years, under different price and import conditions, using the energy system model TEMOA-Italy. A wide spectrum of hydrogen technologies is included in the analysis, covering the production, storage, delivery and end-use stages. National production from fossil fuels with and without CCS, as well as electrolysis and the import of low-carbon hydrogen from North Africa, are the supply solutions that would compete with other options, such as the natural gas, biomethane and electricity value chains, to satisfy sectoral energy needs (transport, industry, buildings, agriculture). Scenario analysis is then used to study this competition under different price and import conditions. The use of TEMOA-Italy allows the work to capture the interaction between the economy and technological detail, which is much needed in energy policy assessment, while the transparency of the analysis and of the results is ensured by the full accessibility of the TEMOA open-source modeling framework.
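
The kind of technology competition such a model resolves can be illustrated with a toy least-cost problem: choose a hydrogen supply mix (reforming with CCS, domestic electrolysis, import) that meets a fixed demand under an emissions cap. The costs, emission factors, capacities and demand below are placeholders for illustration only and are unrelated to the TEMOA-Italy database.

```python
from scipy.optimize import linprog

# Toy least-cost hydrogen supply problem, purely to illustrate the kind of
# competition an energy system optimization model resolves. Costs, emission
# factors, capacities and demand are placeholders, not TEMOA-Italy data.

techs = ["SMR + CCS", "electrolysis", "import"]
cost = [1.8, 3.5, 2.6]          # EUR per kg H2 (assumed)
emis = [1.0, 0.2, 0.4]          # kg CO2 per kg H2 (assumed)
cap = [400.0, 300.0, 500.0]     # max supply per technology [kt H2/yr] (assumed)
demand = 800.0                  # hydrogen demand [kt H2/yr] (assumed)
co2_cap = 450.0                 # emissions cap [kt CO2/yr] (assumed)

res = linprog(
    c=cost,
    A_ub=[emis],            b_ub=[co2_cap],     # respect the emissions cap
    A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],      # meet the hydrogen demand
    bounds=list(zip([0.0] * 3, cap)),
    method="highs",
)

for name, x in zip(techs, res.x):
    print(f"{name:13s}: {x:6.1f} kt/yr")
print(f"total cost   : {res.fun:.0f} MEUR/yr")
```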

Keywords: energy security, energy system optimization models, hydrogen, natural gas, open-source modeling, scenario analysis, TEMOA

Procedia PDF Downloads 106
5210 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in environments. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements with various interfering ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions and of their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models for training and for evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables rapid nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrite. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
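
A minimal version of the calibration step can be written as an ordinary least-squares regression that maps the three absorbance readings to a nitrite concentration; the synthetic training data below (effective absorptivities, interference term, noise level) are invented for illustration and do not reproduce the spectrophotometric calibration set used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch of the calibration step: regress nitrite concentration on the
# absorbances measured at 295, 310 and 357 nm. The synthetic absorptivities,
# interference contribution and noise level are invented for illustration.

rng = np.random.default_rng(1)
n = 200
conc = rng.uniform(0.1, 100.0, n)                 # nitrite concentration [ppm]
interf = rng.uniform(0.0, 20.0, n)                # an interfering ion [ppm]

# Assumed effective absorptivities at (295, 310, 357) nm for nitrite and the
# interferent, plus measurement noise on the absorbances.
eps_no2 = np.array([0.012, 0.010, 0.004])
eps_int = np.array([0.003, 0.001, 0.0005])
A = np.outer(conc, eps_no2) + np.outer(interf, eps_int)
A += rng.normal(0.0, 0.005, A.shape)

model = LinearRegression().fit(A, conc)

# Predict a new sample measured at the three wavelengths.
sample = 25.0 * eps_no2 + 5.0 * eps_int           # "true" 25 ppm nitrite + interferent
print(f"predicted nitrite ~ {model.predict(sample.reshape(1, -1))[0]:.1f} ppm")
```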

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 67
5209 Impact of Fermentation Time and Microbial Source on Physicochemical Properties, Total Phenols and Antioxidant Activity of Finger Millet Malt Beverage

Authors: Henry O. Udeh, Kwaku G. Duodu, Afam I. O. Jideani

Abstract:

Finger millet (FM) [Eleusine coracana] is considered a potential ‘‘super grain’’ by the United States National Academies, as one of the most nutritious of all the major cereals. The regular consumption of FM-based diets has been associated with a reduced risk of diabetes, cataract and gastrointestinal tract disorders. Hypoglycaemic, hypocholesterolaemic, anticataractogenic and other health-promoting properties have been reported. This study examined the effect of fermentation time and microbial source on the physicochemical properties, phenolic compounds and antioxidant activity of two finger millet (FM) malt flours. Sorghum was used as an external reference. The grains were malted, mashed and fermented using either the grain microflora or Lactobacillus fermentum. The phenolic compounds of the resulting beverages were identified and quantified using an ultra-performance liquid chromatography (UPLC) and mass spectrometer (MS) system. A fermentation-time-dependent decrease in the pH and viscosity of the beverages, with a corresponding increase in sugar content, was noted. The phenolic compounds found in the FM beverages were protocatechuic acid, catechin and epicatechin. A decrease in the total phenolics of the beverages was observed with increased fermentation time. The beverages exhibited 2,2-diphenyl-1-picrylhydrazyl and 2,2′-azinobis-3-ethylbenzthiazoline-6-sulfonic acid radical scavenging action and iron-reducing activity, which were significantly (p < 0.05) reduced at 96 h of fermentation for both microbial sources. The 24 h fermented beverages retained a higher amount of total phenolics and had higher antioxidant activity compared with the other fermentation periods. The study demonstrates that FM could be utilised as a functional grain in the production of a non-alcoholic beverage with important phenolic compounds for health promotion and wellness.

Keywords: antioxidant activity, eleusine coracana, fermentation, phenolic compounds

Procedia PDF Downloads 101
5196 The Combined Effect of Different Levels of Fe(III) in Diet and Cr(III) Supplementation on the Ca Status in Wistar Rats

Authors: Staniek Halina

Abstract:

An inappropriate supply of trace elements such as iron(III) and chromium(III) may be a risk factor for many metabolic disorders (e.g., anemia, diabetes) and may also cause toxic effects. However, little is known about their mutual interactions and their impact on these disturbances. The effects of Cr(III) supplementation with a deficient or excessive supply of Fe(III) under in vivo conditions are not yet known. The objective of the study was to investigate the combined effect of different Fe(III) levels in the diet and simultaneous Cr(III) supplementation on the Ca distribution in the organs of healthy rats. The assessment was based on a two-factor (2×3) experiment carried out on 54 female Wistar rats (Rattus norvegicus). The animals were randomly divided into 9 groups and, for 6 weeks, were fed semi-purified AIN-93 diets with three different Fe(III) levels as factor A [control (C) 45 mg/kg (100% of the Recommended Daily Allowance for rodents), deficient (D) 5 mg/kg (10% RDA), and oversupply (H) 180 mg/kg (400% RDA)]. The second factor (B) was simultaneous dietary supplementation with Cr(III) at doses of 1, 50 and 500 mg/kg of diet. Iron(III) citrate was the source of Fe(III). The complex of Cr(III) with propionic acid, also called Cr₃ or chromium(III) propionate (CrProp), was used as the source of Cr(III) in the diet. The Ca content of the analysed samples (liver, kidneys, spleen, heart, and femur) was determined with the Atomic Absorption Spectrometry (AAS) method. It was found that the dietary Fe(III) supply and the Cr(III) supplementation, independently and in combination, influenced Ca metabolism in healthy rats. Regardless of the Cr(III) supplementation, the oversupply of Fe(III) (180 mg/kg) decreased the Ca content in the liver and kidneys, while it increased the Ca saturation of bone tissue. High Cr(III) doses lowered the hepatic Ca content. Moreover, they tended to decrease the Ca content in the kidneys and heart, but this effect was not statistically significant. A combined effect of the experimental factors on the Ca content in the liver and the femur was observed. With the increase in the Fe(III) content of the diet, the Ca level in the liver decreased and bone saturation increased, and additional Cr(III) supplementation intensified these effects. The study proved that different Fe(III) contents in the diet, independently and in combination with Cr(III) supplementation, affected the Ca distribution in the organisms of healthy rats.

Keywords: calcium, chromium(III), iron(III), rats, supplementation

Procedia PDF Downloads 189
5207 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information is important because of its practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detection of change from optical remote sensing images is not a trivial task, due to many factors, including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of each pixel on its neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology is proposed which uses fuzzy c-means clustering to tackle the problem of vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of the pixels on their neighborhood is quantified using the Pearson product-moment correlation and Moran’s I statistic, respectively. The proposed technique consists of two phases. First, the between-image and within-image spatial correlations are calculated to utilize the information that pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only taking care of the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments are conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study, which shows a long-lasting forest fire that continued from July until October 2009. The early and late forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth’s surface) and is compared with the existing K-means clustering technique. Experimental results showed that the proposed technique performs better than the existing one. The proposed technique can easily be extended to optical hyperspectral images and is suitable for many practical applications.
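
A compact way to see the clustering step is a plain fuzzy c-means run on per-pixel change features (here simply the band difference between the two dates and its 3×3 neighbourhood mean); the implementation below is a generic FCM, not the full spatial-correlation-weighted method of the paper, and the image it segments is synthetic.

```python
import numpy as np

# Generic fuzzy c-means on simple per-pixel change features (band difference and
# its 3x3 neighbourhood mean). Illustrative sketch only, not the full
# spatial-correlation-weighted method of the paper; the image is synthetic.

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (dist ** p * (1.0 / dist ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

def neighbourhood_mean(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    before = rng.normal(0.4, 0.05, (60, 60))            # synthetic pre-fire band
    after = before.copy()
    after[20:45, 15:50] -= 0.25                         # burned patch darkens
    after += rng.normal(0.0, 0.03, after.shape)

    diff = after - before
    X = np.column_stack([diff.ravel(), neighbourhood_mean(diff).ravel()])
    centers, U = fuzzy_c_means(X, c=2)
    burned_cluster = int(np.argmin(centers[:, 0]))      # larger drop = burned
    burned_map = (U[:, burned_cluster] > 0.5).reshape(diff.shape)
    print("burned pixels detected:", int(burned_map.sum()), "of", burned_map.size)
```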

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 164
5206 Polymeric Microspheres for Bone Tissue Engineering

Authors: Yamina Boukari, Nashiru Billa, Andrew Morris, Stephen Doughty, Kevin Shakesheff

Abstract:

Poly(lactic-co-glycolic acid) (PLGA) is a synthetic polymer that can be used in bone tissue engineering with the aim of creating a scaffold to support the growth of cells. The formation of microspheres from this polymer is an attractive strategy that would allow the development of an injectable system, hence avoiding invasive surgical procedures. The aim of this study was to develop a microsphere delivery system for use as an injectable scaffold in bone tissue engineering and to evaluate the effect of various formulation parameters on its properties. Porous, lysozyme-containing PLGA microspheres were prepared using the double emulsion solvent evaporation method from polymers of various molecular weights (MW). Scaffolds were formed by sintering and contained 1–3 mg of lysozyme per gram of scaffold. The mechanical and physical properties of the scaffolds were assessed along with the release of lysozyme, which was used as a model protein. The MW of PLGA was found to influence microsphere size during fabrication, with increased MW leading to an increased microsphere diameter. An inverse relationship was observed between PLGA MW and the mechanical strength of the formed scaffolds across loadings, for the low, intermediate and high MW polymers respectively. Lysozyme release from both microspheres and formed scaffolds showed an initial burst release phase, with the microspheres and scaffolds fabricated using high-MW PLGA showing the lowest protein release. Following the initial burst phase, the profiles for each MW followed a similar slow release over 30 days. Overall, the results of this study demonstrate that lysozyme can be successfully incorporated into porous PLGA scaffolds and released over 30 days in vitro, and that varying the MW of the PLGA can be used as a method of altering the physical properties of the resulting scaffolds.

Keywords: bone, microspheres, PLGA, tissue engineering

Procedia PDF Downloads 421
5205 Direct-Displacement Based Design for Buildings with Non-Linear Viscous Dampers

Authors: Kelly F. Delgado-De Agrela, Sonia E. Ruiz, Marco A. Santos-Santiago

Abstract:

An approach is proposed for the design of regular buildings equipped with non-linear viscous dissipating devices. The approach is based on a direct displacement-based seismic design method which satisfies seismic performance objectives. The global system involved is formed by regular structural moment frames capable of supporting gravity and lateral loads with elastic response behavior, plus a set of non-linear viscous dissipating devices which reduce the structural seismic response. The dampers are characterized by two design parameters: (1) a positive real exponent α, which represents the non-linearity of the damper, and (2) the damping coefficient C of the device, whose constitutive force-velocity law is given by F = Cv^α, where v is the velocity between the ends of the damper. The procedure is carried out using a substitute structure. Two limit states are verified: serviceability and near collapse. The reduction of the spectral ordinates due to the additional damping assumed in the design process and introduced to the structure by the non-linear viscous dampers is performed according to a damping reduction factor. For the design of the non-linear damper system, the real velocity is considered instead of the pseudo-velocity. The proposed design methodology is applied to an 8-story steel moment-frame building equipped with non-linear viscous dampers, located in the intermediate soil zone of Mexico City, with a dominant period Tₛ = 1 s. In order to validate the approach, nonlinear static analyses and nonlinear time-history analyses are performed.
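
The constitutive law F = Cv^α can be sketched directly; the snippet below evaluates the damper force at peak velocity for one harmonic displacement cycle and the corresponding energy dissipated per cycle, E_D = 2^(2+α)·Γ²(1+α/2)/Γ(2+α)·C·ω^α·u0^(1+α), a standard closed-form result that reduces to the familiar π·C·ω·u0² for α = 1. The damper parameters and motion amplitude used are illustrative, not the design values of the 8-story frame.

```python
import math

# Force and per-cycle energy dissipation of a non-linear viscous damper with
# constitutive law F = C * sign(v) * |v|**alpha, under harmonic motion
# u(t) = u0 * sin(w*t). Parameter values are illustrative only.

def damper_force(c: float, alpha: float, velocity: float) -> float:
    return c * math.copysign(abs(velocity) ** alpha, velocity)

def energy_per_cycle(c: float, alpha: float, u0: float, omega: float) -> float:
    """E_D = 2**(2+a) * Gamma(1+a/2)**2 / Gamma(2+a) * C * w**a * u0**(1+a);
    reduces to pi*C*w*u0**2 for a linear damper (alpha = 1)."""
    lam = 2.0 ** (2.0 + alpha) * math.gamma(1.0 + alpha / 2.0) ** 2 / math.gamma(2.0 + alpha)
    return lam * c * omega ** alpha * u0 ** (1.0 + alpha)

if __name__ == "__main__":
    u0 = 0.02                       # displacement amplitude [m]
    omega = 2.0 * math.pi / 1.0     # circular frequency for Ts = 1 s [rad/s]
    c = 300.0                       # damping coefficient [kN*(s/m)**alpha]
    for alpha in (0.35, 0.5, 1.0):
        f_peak = damper_force(c, alpha, u0 * omega)      # force at peak velocity
        ed = energy_per_cycle(c, alpha, u0, omega)
        print(f"alpha = {alpha:4.2f}: peak force ~ {f_peak:6.1f} kN, E_D ~ {ed:6.3f} kN*m")
```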

Keywords: based design, direct-displacement based design, non-linear viscous dampers, performance design

Procedia PDF Downloads 188
5204 Corporate Law and Its View Point of Locking in Capital

Authors: Saad Saeed Althiabi

Abstract:

This paper discusses corporate positioning and how incorporation became popular as a way to systematize production because of the unique manner in which it enabled organizers to secure financial capital by locking it in. The power to lock in capital comes from the fact that a corporation exists as a separate legal entity, whose survival and governance are separated from any of its participants. The law essentially creates a different legal person when a corporation is created. Although this idea has been played down in the legal literature of the last decades in favor of the view that a corporation is purely something through which natural persons interrelate, recent legal research has begun to reassess the importance of entity status. Entity status under the law, and the related separation of governance from the input of financial capital through the configuration of a corporation, allowed corporate participants to do somewhat more than engage in a series of business transactions.

Keywords: corporate law, entity status, locking in capital, financial capital

Procedia PDF Downloads 546
5203 Implementation and Modeling of a Quadrotor

Authors: Ersan Aktas, Eren Turanoğuz

Abstract:

In this study, a quad-rotor, electrically driven unmanned aerial vehicle system is designed and modeled using fundamental dynamic equations. After that, the mechanical, electronic and control systems of the air vehicle are designed and implemented. Brushless motor speeds are altered via electronic speed controllers in order to achieve the desired controllability. The vehicle's fundamental Euler angles (i.e., roll angle, pitch angle, and yaw angle) are obtained via an AHRS sensor. These angles are provided as an input to the control algorithm that runs on the soft processor on the electronic card. The vehicle control algorithm is implemented on the electronic card. A controller is designed and tuned for each Euler angle. Finally, flight tests have been performed to observe and improve the flight characteristics.
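
A minimal form of the per-angle controller described above is a discrete PID acting on the error between the commanded and the AHRS-measured Euler angle; the gains, loop rate and the toy first-order plant used to exercise it below are illustrative assumptions, not the values tuned on the actual vehicle.

```python
# Discrete PID controller of the kind run for each Euler angle (roll, pitch, yaw).
# Gains, loop rate and the toy plant used to exercise it are illustrative
# assumptions, not the values tuned on the actual vehicle.

class AnglePID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_deg: float, measured_deg: float) -> float:
        error = setpoint_deg - measured_deg
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    dt = 0.005                              # 200 Hz control loop (assumed)
    roll_pid = AnglePID(kp=4.0, ki=0.5, kd=0.8, dt=dt)
    roll, roll_rate = 0.0, 0.0              # toy roll dynamics with some damping
    for step in range(600):                 # 3 s of simulated flight
        command = roll_pid.update(setpoint_deg=10.0, measured_deg=roll)
        roll_rate += (command - 0.5 * roll_rate) * dt   # crude actuator + damping
        roll += roll_rate * dt
    print(f"roll after 3 s ~ {roll:.2f} deg (target 10 deg)")
```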

Keywords: quadrotor, UAS applications, control architectures, PID

Procedia PDF Downloads 356
5202 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen’s non-local theory of elasticity. The physical and thermal properties are sensitive to changes of dimensions at the nano level. The classical theory of elasticity is unable to describe such changes in material properties, because, during its development, the consideration of molecular-scale objects was avoided. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state at every point of the structure, whereas within the classical theory of elasticity the stress state of the body depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved by using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose non-trivial solution exists if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.
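
The eigenfrequency search can be sketched numerically: assemble the frequency-dependent coefficient matrix of the boundary and intermediate conditions, scan its determinant for sign changes, and refine each root by bisection. As a stand-in, the matrix below is the classical (local) cantilever-beam one; the nanoarch matrix would additionally carry the step geometry, the crack compliances and the non-local parameter.

```python
import math

# Sketch of the eigenfrequency search: scan the determinant of a
# frequency-dependent coefficient matrix for sign changes and bisect each root.
# The 2x2 matrix used here is the classical local cantilever beam one, serving
# only as a stand-in for the nanoarch boundary/intermediate-condition matrix.

def characteristic_det(beta_l: float) -> float:
    s, c = math.sin(beta_l), math.cos(beta_l)
    sh, ch = math.sinh(beta_l), math.cosh(beta_l)
    m = [[-(s + sh), -(c + ch)],
         [-(c + ch),  (s - sh)]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]     # 2x2 determinant

def find_roots(f, lo, hi, steps=2000, tol=1e-10):
    roots, prev_x, prev_f = [], lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        fx = f(x)
        if prev_f * fx < 0:                          # sign change -> bisect
            a, b = prev_x, x
            while b - a > tol:
                mid = 0.5 * (a + b)
                if f(a) * f(mid) <= 0:
                    b = mid
                else:
                    a = mid
            roots.append(0.5 * (a + b))
        prev_x, prev_f = x, fx
    return roots

if __name__ == "__main__":
    roots = find_roots(characteristic_det, 0.1, 11.0)
    print("non-dimensional eigenvalues beta*L:", [round(r, 4) for r in roots])
    # Expected classical cantilever values: 1.8751, 4.6941, 7.8548, 10.9955
```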

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 126
5201 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States

Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh

Abstract:

Insulin and glucagon are responsible for the homeostasis of key plasma metabolites such as glucose, amino acids and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two signaling pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady-state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to the input plasma glucose level. This indicates that two steady-state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, during post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver. The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to the input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value.
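
The bistable behaviour attributed to positive feedback can be illustrated with a generic toy model: a single activity variable with Hill-type self-activation, driven by a glucose-dependent input, swept up and then down in glucose to expose the hysteresis between the ON and OFF paths. The equations and parameters below are a generic illustration of such a feedback loop, not the actual integrated signaling network of the study.

```python
import numpy as np

# Generic toy model of bistability from positive feedback: a single "AKT-like"
# activity x with Hill-type self-activation, driven by a glucose-dependent
# input, relaxed to steady state while glucose is swept up and then down.
# Equations and parameters are a generic illustration, not the study's network.

def steady_state(glucose, x0, k_deg=2.5, v_fb=4.0, K=1.0, n=4, dt=0.01, t_end=150.0):
    x = x0
    for _ in range(int(t_end / dt)):
        drive = 0.1 * glucose                               # glucose-dependent input
        feedback = v_fb * x ** n / (K ** n + x ** n)        # positive feedback
        x += dt * (drive + feedback - k_deg * x)
    return x

if __name__ == "__main__":
    glucose_grid = np.linspace(0.0, 12.0, 25)
    x, on_path = 0.0, []
    for g in glucose_grid:                  # sweep glucose upward (OFF -> ON path)
        x = steady_state(g, x)
        on_path.append(x)
    off_path = []
    for g in glucose_grid[::-1]:            # sweep glucose back down (ON -> OFF path)
        x = steady_state(g, x)
        off_path.append(x)
    off_path = off_path[::-1]
    for g, up, down in zip(glucose_grid, on_path, off_path):
        marker = "  <- bistable" if abs(up - down) > 0.5 else ""
        print(f"glucose {g:4.1f}: up-sweep x = {up:5.2f}, down-sweep x = {down:5.2f}{marker}")
```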

Keywords: bistability, diabetes, feedback and crosstalk, obesity

Procedia PDF Downloads 272
5200 Hand Motion and Gesture Control of Laboratory Test Equipment Using the Leap Motion Controller

Authors: Ian A. Grout

Abstract:

In this paper, the design and development of a system to provide hand motion and gesture control of laboratory test equipment are considered and discussed. The Leap Motion controller is used to provide an input to control a laboratory power supply as part of an electronic circuit experiment. By suitable hand motions and gestures, control of the power supply is provided remotely and without the need to physically touch the equipment used. As such, it provides an alternative manner in which to control electronic equipment via a PC and is considered here within the field of human-computer interaction (HCI).

Keywords: control, hand gesture, human computer interaction, test equipment

Procedia PDF Downloads 308
5199 Designing and Simulation of a CMOS Square Root Analog Multiplier

Authors: Milad Kaboli

Abstract:

A new CMOS low-voltage, current-mode, four-quadrant analog multiplier based on a squarer circuit with voltage output is presented. The proposed circuit is composed of a pair of current subtractors, a pair of differential-input V-I converters and a pair of voltage squarers. The circuit was simulated using the HSPICE simulator in a standard 0.18 μm CMOS process, level 49 MOSIS (BSIM3 v3.2 SPICE-based). Simulation results show the performance of the proposed circuit, and experimental results are given to confirm the operation. This multiplier topology results in a high-frequency capability with low power consumption. The multiplier operates from a ±1.2 V power supply. The simulation results of the analog multiplier demonstrate a THD of 0.65% at 10 MHz, a −3 dB bandwidth of 1.39 GHz, and a maximum power consumption of 7.1 mW.

Keywords: analog processing circuit, WTA, LTA, low voltage

Procedia PDF Downloads 471
5198 Empirical Acceleration Functions and Fuzzy Information

Authors: Muhammad Shafiq

Abstract:

In accelerated life testing approaches, lifetime data are obtained under various conditions which are considered more severe than the usual condition. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques which do not consider fuzziness and are based only on precise lifetime observations lead to pseudo results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed an increased fuzziness in the transformed lifetimes as compared to the input data.

Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data

Procedia PDF Downloads 290
5197 Investigation of Enterotoxigenic Staphylococcus aureus in Kitchen of Catering

Authors: Çiğdem Sezer, Aksem Aksoy, Leyla Vatansever

Abstract:

This study was carried out for the purpose of evaluating the public health risk and identifying enterotoxigenic Staphylococcus aureus in the kitchen of a catering company. In the kitchen, samples were taken by swabs from the surfaces of equipment in the salad, meat and bakery sections. The samples were investigated for Staphylococcus aureus by classical cultural methods. For this purpose, a 10×10 cm area was defined (salad cutting and chopping surfaces, knives, meat grinder, meat chopping surface), and samples were taken from this area with sterile swabs moistened with FTS. In total, 50 samples were obtained. Under aseptic conditions, the surface of Baird-Parker agar (with egg yolk tellurite) was inoculated with the swabs. After 24-48 hours of incubation at 37°C, black colonies of 1-1.5 mm diameter surrounded by a zone indicating lecithinase activity were identified as S. aureus after Gram staining, catalase, coagulase, glucose and mannitol fermentation and thermonuclease tests. Genotypic characterization (Staphylococcus genus- and S. aureus species-specific) of the isolates was performed by PCR. An ELISA test was applied to the isolates for the identification of staphylococcal enterotoxins (SET) A, B, C, D and E in the bacterial cultures. Measurements were taken at 450 nm in an ELISA reader using a Ridascreen Total set ELISA test kit (r-biopharm R4105, Enterotoxin A, B, C, D, E). The results were calculated according to the manufacturer’s instructions. From the 50 samples, 97 S. aureus isolates were obtained by cultural methods, of which 60 were confirmed by PCR analysis. According to the ELISA test, only 1 of the 60 isolates was found to be enterotoxigenic. The enterotoxigenic strain was identified from the salad chopping and cutting surface. The identification of S. aureus in the kitchen of a catering company indicates a significant source of contamination. Contamination during the preparation of salads, which are consumed raw, is especially important. Such foods can be a potential source of food-borne poisoning and have been identified as posing a significant risk to consumers.

Keywords: Staphylococcus aureus, enterotoxin, catering, kitchen, health

Procedia PDF Downloads 390
5196 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations

Authors: Ayesha Anjum, Arbaz Basha

Abstract:

Active Galactic Nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the emission site. Some of the major classes of AGN are (a) blazars, with featureless spectra, subclassified into BL Lacertae objects, Flat Spectrum Radio Quasars (FSRQs), and others; (b) Seyferts, with prominent emission-line features, classified into broad-line and narrow-line Seyferts of Type 1 and Type 2; and (c) quasars, among other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects with automated photometric and spectroscopic methods. A sample of blazars is obtained from the third Fermi catalog. For variability analysis, we searched for light curves of these objects from the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 narrow-line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work focuses on studying the final sample of 256 objects in general and the Seyferts in particular. Owing to the fact that the classification is automated, SDSS has misclassified some of these objects as quasars, galaxies, and stars; the reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both the W1 and W2 bands. PMN J0948+0022 is observed between MJD 57154.79 and 58810.57. PKS 1502+036 is observed between MJD 57232.42 and 58517.11, which amounts to a period of over six years. The data are divided into epochs spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both the W1 and W2 bands. This confirms that the objects are variable at mid-infrared wavelengths on both long and short timescales. The sources are also examined for color variability. Objects show either a bluer-when-brighter (BWB) or a redder-when-brighter (RWB) trend. A possible explanation for the present objects being BWB is that the longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission region corresponds to one day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
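
The variability measure commonly used in such studies is the normalized excess variance, σ²_NXS = (S² − ⟨σ_err²⟩)/x̄², with fractional variability F_var = √σ²_NXS; the sketch below computes it for a single-band light curve. The magnitudes of the fluxes and errors in the example are synthetic, not WISE/NEOWISE data for these sources.

```python
import numpy as np

# Normalized excess variance and fractional variability for a single-band light
# curve, using the standard definitions sigma_NXS^2 = (S^2 - <err^2>) / mean^2
# and F_var = sqrt(sigma_NXS^2). Fluxes and errors below are synthetic, not
# WISE/NEOWISE measurements of these sources.

def excess_variance(flux: np.ndarray, flux_err: np.ndarray):
    mean = flux.mean()
    s2 = flux.var(ddof=1)                        # sample variance of the light curve
    mse = np.mean(flux_err ** 2)                 # mean square measurement error
    sigma_nxs2 = (s2 - mse) / mean ** 2
    f_var = np.sqrt(sigma_nxs2) if sigma_nxs2 > 0 else 0.0
    return sigma_nxs2, f_var

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 40                                       # epochs in one band (e.g. W1)
    intrinsic = 10.0 * (1.0 + 0.15 * np.sin(np.linspace(0, 4 * np.pi, n)))  # mJy
    errors = np.full(n, 0.3)                     # per-epoch flux uncertainty [mJy]
    observed = intrinsic + rng.normal(0.0, errors)
    nxs2, fvar = excess_variance(observed, errors)
    print(f"sigma_NXS^2 = {nxs2:.4f}, F_var = {fvar:.3f}  (variable: {nxs2 > 0})")
```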

Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE

Procedia PDF Downloads 122