Search results for: step input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4883

4613 Organic Carbon Pools Fractionation of Lacustrine Sediment with a Stepwise Chemical Procedure

Authors: Xiaoqing Liu, Kurt Friese, Karsten Rinke

Abstract:

Lacustrine sediments archive rich paleoenvironmental information about a lake and its surrounding environment. Additionally, modern sediment is used as an effective medium for monitoring lakes. Organic carbon in sediment is a heterogeneous mixture with varying turnover times and qualities, which result from the different biogeochemical processes involved in the deposition of organic material. Therefore, the isolation of different carbon pools is important for research on lacustrine conditions. However, the numerous available fractionation procedures can hardly yield homogeneous carbon pools in terms of stability and age. In this work, a multi-step fractionation protocol that treated sediment with hot water, HCl, H2O2 and Na2S2O8 in sequence was adopted; the treated sediment from each step was analyzed for isotopic and structural composition with an Isotope Ratio Mass Spectrometer coupled with an elemental analyzer (IRMS-EA) and solid-state 13C Nuclear Magnetic Resonance (NMR), respectively. The sequential extractions with hot water, HCl, and H2O2 yielded a more homogeneous, C3 plant-originating OC fraction, which was characterized by an atomic C/N ratio shift from 12.0 to 20.8 and by 13C and 15N isotopic signatures that were 0.9‰ and 1.9‰ more depleted than in the original bulk sediment, respectively. Additionally, the H2O2-resistant residue was dominated by stable components such as lignins, waxes, cutans, tannins, steroids, aliphatic proteins, and complex carbohydrates. In the acid hydrolysis step, 6 M HCl was much more effective than 1 M HCl at isolating a sedimentary OC fraction with a higher degree of homogeneity. Owing to its extremely high removal rate of organic matter, the Na2S2O8 oxidation step is only suggested if isolation of the most refractory OC pool is mandatory. We conclude that this multi-step chemical fractionation procedure is effective in isolating more homogeneous OC pools in terms of stability and functional structure, and it can serve as a promising method for the fractionation of OC pools in sediment or soil in future lake research.

Keywords: 13C-CPMAS-NMR, 13C signature, lake sediment, OC fractionation

Procedia PDF Downloads 280
4612 Optimization of Gold Mining Parameters by Cyanidation

Authors: Della Saddam Housseyn

Abstract:

Gold, the quintessential noble metal, is one of the most popular metals today, given its ever-increasing cost on the international market. The Amesmessa gold deposit is one of the gold-producing deposits. The first step in our work was to analyze the ore (considered a rich ore). Mineralogical and chemical analysis showed that the ore consists mainly of quartz in addition to other phases such as Al2O3, Fe2O3, CaO, and dolomite. The second step consisted of leaching tests carried out in rolling bottles. These tests were carried out on 14 samples to determine the maximum recovery rate and the optimum reagent consumption (NaCN and CaO). Tests carried out at a pulp density of 50% solids, a cyanide concentration of 500 ppm, and a particle size of less than 0.6 mm at alkaline pH gave a recovery rate of 94.37%.
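
As an illustration of how a recovery rate like the one reported above can be computed from assay data, the following sketch applies the standard mass-balance definitions; the assay and reagent values in it are invented for the example, not the paper's data.

```python
# Illustrative only: assay and reagent values below are hypothetical, not the paper's data.
def gold_recovery(head_grade_gpt, tail_grade_gpt):
    """Recovery (%) from head and residue (tail) grades in g/t, assuming equal solid mass."""
    return (head_grade_gpt - tail_grade_gpt) / head_grade_gpt * 100.0

def cyanide_consumption(added_kg_per_t, residual_kg_per_t):
    """Net NaCN consumption in kg per tonne of ore."""
    return added_kg_per_t - residual_kg_per_t

print(round(gold_recovery(12.0, 0.68), 2))   # ~94.3 % for these assumed grades
print(cyanide_consumption(1.0, 0.45))        # 0.55 kg/t
```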

Keywords: cyanide, DRX, FX, gold, leaching, rate of recovery, SAA

Procedia PDF Downloads 161
4611 Designing a Tool for Software Maintenance

Authors: Amir Ngah, Masita Abdul Jalil, Zailani Abdullah

Abstract:

The aim of software maintenance is to keep the software system in step with advances in software and hardware technology. One of the early tasks in software maintenance is to extract information at a higher level of abstraction. In this paper, we present the process of designing an information extraction tool for software maintenance. The tool can extract basic information from an old program, such as variables, base classes, derived classes, class objects, and functions. The tool has two main parts: a lexical analyzer module that reads the input file character by character, and a searching module through which the user retrieves the basic information from the existing program. We implemented this tool for a patterned subset of C++ as the input language.
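
To make the idea of such an extraction tool concrete, here is a minimal sketch (not the authors' tool) that pulls class and function names out of C++ source text with regular expressions in Python; a real lexical analyzer would handle many more constructs.

```python
# Minimal sketch (not the authors' tool): extracting class and function names
# from C++ source text with regular expressions.
import re

CLASS_RE = re.compile(r'\bclass\s+(\w+)\s*(?::\s*(?:public|protected|private)\s+(\w+))?')
FUNC_RE = re.compile(r'\b[\w:]+\s+(\w+)\s*\([^;)]*\)\s*\{')

def extract_info(source: str) -> dict:
    """Return basic information (classes, base classes, function names) from C++ source."""
    classes = [{"name": m.group(1), "base": m.group(2)} for m in CLASS_RE.finditer(source)]
    functions = FUNC_RE.findall(source)
    return {"classes": classes, "functions": functions}

cpp = """
class Shape { };
class Circle : public Shape { };
double area(double r) { return 3.14159 * r * r; }
"""
print(extract_info(cpp))
```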

Keywords: extraction tool, software maintenance, reverse engineering, C++

Procedia PDF Downloads 471
4610 Measurement of Susceptibility Users Using Email Phishing Attack

Authors: Cindy Sahera, Sarwono Sutikno

Abstract:

Rapid technological development also has negative impacts, namely the increase in crimes based on technology, or cybercrime. One technique that can be used to conduct cybercrime attacks is email phishing. The issue is whether users are aware that email can be misused by others in ways that harm them. This research was conducted to measure the susceptibility of selected targets to email abuse. The objectives of this research are to measure the targets' susceptibility and to find vulnerabilities in email recipients. There were three steps in this research: (1) the information gathering phase, (2) the design phase, and (3) the execution phase. The first step includes collecting the information necessary to carry out an attack on a target. The next step is to design an attack against the target. The last step is to send phishing emails to the target. There are three levels of susceptibility: level 1, level 2, and level 3. Level 1 indicates low susceptibility, level 2 intermediate susceptibility, and level 3 high susceptibility. The results showed that more users fall into level 1 and level 2 than into level 3, which means that users are not too careless. However, this does not mean users are safe. There are still vulnerabilities that may occur, such as automatic location detection when opening emails and malware automatically downloaded when the user clicks a link in the email.
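
The paper does not publish its scoring rules, so the sketch below is only an illustration of how recorded recipient actions could be mapped onto the three susceptibility levels; the specific action-to-level mapping is an assumption.

```python
# Illustrative sketch only: the mapping from recorded actions to levels is an assumption.
def susceptibility_level(clicked_link: bool, submitted_data: bool) -> int:
    """Map a recipient's recorded actions to a susceptibility level (1 = low, 3 = high)."""
    if submitted_data:
        return 3          # gave away information on the phishing page
    if clicked_link:
        return 2          # followed the link but stopped there
    return 1              # ignored or merely opened the email

recorded_actions = [
    {"clicked_link": False, "submitted_data": False},
    {"clicked_link": True,  "submitted_data": False},
    {"clicked_link": True,  "submitted_data": True},
]
print([susceptibility_level(**a) for a in recorded_actions])   # [1, 2, 3]
```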

Keywords: cybercrime, email phishing, susceptibility, vulnerability

Procedia PDF Downloads 259
4609 Hybrid Recovery of Copper and Silver from Photovoltaic Ribbon and Ag finger of End-Of-Life Solar Panels

Authors: T. Patcharawit, C. Kansomket, N. Wongnaree, W. Kritsrikan, T. Yingnakorn, S. Khumkoa

Abstract:

Recovery of pure copper and silver from end-of-life photovoltaic panels was investigated in this paper using an effective hybrid pyro-hydrometallurgical process. In the first step of waste treatment, the solar panel waste was dismantled to obtain a PV sheet, which was cut and calcined at 500°C to separate the PV ribbon from glass cullet, ash, and volatiles, while the silicon wafer containing the silver finger was collected for recovery. In the second step of metal recovery, copper was recovered from the photovoltaic ribbon via 1-3 M HCl leaching with SnCl₂ and H₂O₂ additions in order to remove the tin-lead coating on the ribbon. The leached copper band was cleaned and subsequently melted as an anode for the next step of electrorefining. Stainless steel was set as the cathode with CuSO₄ as the electrolyte, and at a potential of 0.2 V, copper of 99.93% purity was obtained at 96.11% recovery after 24 hours. For silver recovery, the silicon wafer containing the silver finger was leached using HNO₃ at 1-4 M in an ultrasonic bath. In the next step of precipitation, silver chloride was obtained and subsequently reduced by sucrose and NaOH to give silver powder prior to oxy-acetylene melting to finally obtain pure silver metal. The integrated recycling process is considered economical, providing effective recovery of high-purity metals such as copper and silver, while other materials such as aluminum, copper wire, and glass cullet can also be recovered for commercial reuse. Recovered compounds such as PbCl₂ and SnO₂ can also enter the market.

Keywords: electrorefining, leaching, calcination, PV ribbon, silver finger, solar panel

Procedia PDF Downloads 114
4608 Maize Farmers’ Perception of Sharp Practices among Agro-Input Dealers in Ibadan/Ibarapa Agricultural Zone, Oyo State

Authors: Ademola A. Ladele, Peace I. Aburime

Abstract:

Fake and substandard agricultural inputs pose a serious stumbling block to farm productivity and, subsequently, to improved livelihoods. There is, therefore, a need to pave the way for sustainable agriculture and self-sufficiency in food production by proffering solutions to this challenge. Maize farmers' perception of sharp practices among agro-input dealers in the Ibadan/Ibarapa agricultural zone of Oyo State was therefore investigated. A multi-stage random sampling technique was used to select registered maize farmers in the Ibadan/Ibarapa agricultural zone of the Oyo State Agricultural Development Programme (OYSADEP). A structured questionnaire was used to collect information on the perception of sharp practices and their effects. A total of seventy-five maize farmers were interviewed. A focus group discussion was organized to identify ways of curbing sharp practices, to complement the survey. Data were analyzed using descriptive statistics, Chi-square, and Pearson Product Moment Correlation (PPMC). Forms of sharp practices indicated were the sale of expired fertilizers, expired pesticides, expired herbicides, underweight fertilizers, adulterated fertilizers, adulterated herbicides, packs containing broken seeds, infested seeds, lack of truth in labeling/wrong labels, manipulation of measuring scales, and false declaration of hectarages covered by tractor operators. The majority had an unfavorable perception of agro-input dealers with respect to sharp practices. A significant relationship was observed between respondents' level of education and their perception of sharp practices. There were no significant relationships between respondents' sex, marital status, or religion and their perception of sharp practices. A significant correlation existed between the forms of sharp practices and their perceived effect on agricultural production. It is concluded that the perceived effect of sharp practices was critical and that an endemic culture of sharp practices prevailed in agro-input dealing in the Ibadan/Ibarapa agricultural zone. A standard regulatory system that will certify and monitor the quality of inputs should be put in place.
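
The statistical tests named in the abstract (Chi-square and PPMC) take only a few lines to run; the sketch below uses invented example data purely to show the procedure, not the study's survey results.

```python
# Example data are invented; only the test procedure mirrors the analysis described above.
import numpy as np
from scipy.stats import chi2_contingency, pearsonr

# Chi-square: education level (rows) vs. perception of sharp practices (columns).
contingency = np.array([[12, 18],
                        [25, 20]])
chi2, p_chi, dof, _ = chi2_contingency(contingency)

# PPMC: sharp-practice score vs. perceived effect on production.
sharp_score = np.array([3.1, 4.0, 2.5, 4.4, 3.8, 2.9])
effect_score = np.array([2.8, 4.2, 2.2, 4.6, 3.5, 3.0])
r, p_r = pearsonr(sharp_score, effect_score)

print(f"chi2={chi2:.2f}, p={p_chi:.3f}; r={r:.2f}, p={p_r:.3f}")
```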

Keywords: agricultural productivity, agro-input dealers, maize farmers, sharp practices

Procedia PDF Downloads 177
4607 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake input data for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to ensure that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure's geometry due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes, which neglect the importance of PEF and SSI, can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case where SSI is included in the scenario. The results are validated against the literature.

Keywords: Abaqus Software, Finite Element Analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 102
4606 Preparation of Catalyst-Doped TiO2 Nanotubes by Single Step Anodization and Potential Shock

Authors: Hyeonseok Yoo, Kiseok Oh, Jinsub Choi

Abstract:

Titanium oxide nanotubes have attracted great attention because of their photocatalytic activity and large surface area. To enhance their electrochemical properties, catalysts should be doped into the structure, because titanium oxide nanotubes themselves have low electrical conductivity and catalytic activity. It has been reported that Ru- and Ir-doped titanium oxide electrodes exhibit high efficiency and low overpotential in the oxygen evolution reaction (OER) for water splitting. In general, titanium oxide nanotubes with a high aspect ratio cannot easily be doped by conventional complex methods. Herein, two facile routes for Ru doping into high aspect ratio titanium oxide nanotubes, namely single-step anodization and potential shock, are introduced in detail. When single-step anodization was carried out, the stability of the electrodes increased; however, the onset potential shifted in the anodic direction. On the other hand, when a high potential-shock voltage was applied, a large amount of ruthenium/ruthenium oxides was doped into the titanium oxide nanotubes and thick barrier oxide layers were formed simultaneously. Regardless of the doping route, ruthenium/ruthenium oxides were homogeneously doped into the titanium oxide nanotubes, and doping in aqueous solution generally incorporated a larger amount of Ru into the nanotubes than doping in non-aqueous solution. The amounts of doped catalyst were analyzed by X-ray photoelectron spectroscopy (XPS). The optimum condition for water splitting was investigated in terms of the amount of doped Ru and the thickness of the barrier oxide layer.

Keywords: doping, potential shock, single step anodization, titanium oxide nanotubes

Procedia PDF Downloads 433
4605 The Simple Two-Step Polydimethylsiloxane (PDMS) Transferring Process for High Aspect Ratio Microstructures

Authors: Shaoxi Wang, Pouya Rezai

Abstract:

High aspect ratio features are necessary parts of complex microstructures. Some of the methods available for achieving high aspect ratios require expensive materials or complex processes; with others, it is difficult to realize even simple high aspect ratio structures. This paper presents a simple and cheap two-step polydimethylsiloxane (PDMS) transferring process to obtain high aspect ratio single pillars, which only requires covering the PDMS mold with a Brij@52 surfactant solution. The experimental results demonstrate that the method is efficient and effective.

Keywords: high aspect ratio, microstructure, PDMS, Brij

Procedia PDF Downloads 240
4604 Social Media Diffusion And Implications For Opinion Leadership In Northcentral Nigeria

Authors: Chuks Odiegwu-Enwerem

Abstract:

The classical notion of opinion leadership presupposes that the media are at the center of effective and successful opinion leadership. Under this idea, an opinion leader is an active media user who consumes, understands, digests and interprets messages for the understanding and acceptance/adoption of lower-end media users – whose access to and understanding of media content are supposedly low. Because of their unique access to and presumed understanding of media functions and their content, opinion leaders are typically esteemed by those who look forward to and accept their opinions. Lazarsfeld and Katz's two-step flow of communication theory is the basis of opinion leadership – propelled by limited access to the media. With the emergence and spread of social media and the unlimited access to it by all and sundry, however, this study interrogates the relevance and application of opinion leaders and, by implication, the two-step flow theory of communication in Nigeria's Northcentral region. It seeks to determine whether opinion leaders are still in the picture and whether they still exert considerable influence, especially in matters of political conversation and decision-making among the citizens of this area. It further explores whether the diffusion of social media is a reality, how the 'low-end' media users react to their new-found freedom of access to media and use it to inform their decisions on important matters, and whether they are still glued to their opinion leaders. This study explores the empirical dimensions of the two-step flow hypothesis in relation to the activities of social media to determine whether a change has occurred and in what direction, using mixed methods of survey and in-depth interviews. Our understanding of, and belief in, some theoretical assumptions may be enhanced or challenged by the study outcome.

Keywords: Opinion Leadership, Active Media User, Two-Step-Flow, Social media, Northcentral Nigeria

Procedia PDF Downloads 49
4603 Characterization of Two Hybrid Welding Techniques on SA 516 Grade 70 Weldments

Authors: M. T. Z. Butt, T. Ahmad, N. A. Siddiqui

Abstract:

Commercially, SA 516 Grade 70 is frequently used in the fabrication industry for the manufacture of pressure vessels, boilers, storage tanks, etc. Heat input is the major parameter during welding that may bring significant changes in the microstructure as well as the mechanical properties. Different welding techniques have different heat input rates per unit surface area. Materials of large thickness are dealt with using different combinations of welding techniques to achieve the required mechanical properties. In the present research, two schemes of hybrid welding techniques have been studied: Scheme 1, SMAW (Shielded Metal Arc Welding) & GTAW (Gas Tungsten Arc Welding); and Scheme 2, SMAW & SAW (Submerged Arc Welding). The purpose of these schemes was to study the effect of hybrid welding on the microstructure and mechanical properties of the weldment, the heat-affected zone, and the base metal area. It is significant to note that the thickness of the base plate was 12 mm, and the welding conditions and parameters were set according to ASME Section IX. It was observed that the two different hybrid welding techniques, performed on two different plates, produced more or less similar mechanical properties. This means that the heat input, the welding techniques, and the varying welding operating conditions and temperatures did not have any detrimental effect on the mechanical properties. Hence, the hybrid welding techniques mentioned in the present study are favorable for industrial use with plate thicknesses of around 12 mm.
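
Heat input, the parameter highlighted above, is conventionally estimated per unit weld length from arc voltage, current, and travel speed; the sketch below evaluates that standard formula with assumed example values and approximate arc-efficiency factors, not the study's actual welding parameters.

```python
# Conventional heat-input estimate; the numbers below are assumptions, not the paper's settings.
def heat_input_kj_per_mm(voltage_v, current_a, travel_speed_mm_per_min, efficiency=1.0):
    """Heat input (kJ/mm) = efficiency * V * I * 60 / (travel speed * 1000)."""
    return efficiency * voltage_v * current_a * 60.0 / (travel_speed_mm_per_min * 1000.0)

# Approximate arc-efficiency factors commonly quoted: SMAW ~0.8, GTAW ~0.7, SAW ~1.0.
print(heat_input_kj_per_mm(24, 120, 150, efficiency=0.8))   # SMAW-like example pass
print(heat_input_kj_per_mm(30, 450, 400, efficiency=1.0))   # SAW-like example pass
```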

Keywords: grade 70, GTAW, hybrid welding, SAW, SMAW

Procedia PDF Downloads 314
4602 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study

Authors: Faris Tarlochan, Siva Mahesh Tangutooru

Abstract:

The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway, as it receives most of its input information from retinal ganglion cells (RGC) and sends it to the visual cortex. Low-threshold calcium currents (IT) at the membrane are a unique indicator for characterizing the firing functionality that LGN neurons gain from RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements, such as the functional mapping of RGC to LGN. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore the functionalities of the LGN. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, the morphological parameters, and the orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, with different morphological parameters (soma size, dendrite size and structure). The orientation (0–180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep brain stimulation toward the LGN neuron is performed. A reduced dendrite structure was used in the model, based on the Bush–Sejnowski algorithm, to decrease the computational time while conserving its input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 µm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
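
The paper's compartmental LGN model with IT currents is far more detailed than anything reproducible here; the sketch below only illustrates the general idea of sweeping stimulus amplitude to locate a firing threshold, using a generic leaky integrate-and-fire surrogate with assumed parameters rather than the morphological model described above.

```python
# Leaky integrate-and-fire surrogate (NOT the paper's compartmental LGN model):
# sweep the stimulus amplitude and report the smallest value that evokes a spike.
import numpy as np

def fires(amplitude_na, tau_ms=10.0, r_mohm=100.0, v_rest=-70.0, v_thresh=-55.0,
          dt_ms=0.1, t_stop_ms=100.0):
    v = v_rest
    for _ in np.arange(0.0, t_stop_ms, dt_ms):
        dv = (-(v - v_rest) + r_mohm * amplitude_na) / tau_ms   # membrane equation (mV/ms)
        v += dv * dt_ms
        if v >= v_thresh:
            return True
    return False

amplitudes = np.arange(0.05, 0.5, 0.01)                 # nA, assumed sweep range
threshold = next(a for a in amplitudes if fires(a))
print(f"threshold amplitude ~ {threshold:.2f} nA")
```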

Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses

Procedia PDF Downloads 258
4601 Major Incident Tier System in the Emergency Department: An Approach

Authors: Catherine Bernard, Paul Ransom

Abstract:

Recent events have prompted emergency planners to re-evaluate their emergency response to major incidents and mass casualties. At the Royal Sussex County Hospital, we have adopted a tiered system comprising three levels, anticipating an increasing P1, P2 or P3 load. This will aid planning in the golden period between Major Incident 'Standby' and 'Declared'. Each tier offers step-by-step instructions on appropriate patient movement within and out of the department, as well as suggestions for overflow areas and additional staffing levels. This system can be adapted to individual hospitals and provides concise instructions to be followed in a potentially overwhelming situation.

Keywords: disaster planning, emergency preparedness, major incident planning, mass casualty event

Procedia PDF Downloads 353
4600 An Impairment of Spatiotemporal Gait Adaptation in Huntington's Disease when Navigating around Obstacles

Authors: Naznine Anwar, Kim Cornish, Izelle Labuschagne, Nellie Georgiou-Karistianis

Abstract:

Falls and subsequent injuries are common in individuals with symptomatic Huntington's disease (symp-HD). As part of daily walking, navigating around obstacles may incur a greater risk of falls in symp-HD. We designed an obstacle-crossing experiment to examine adaptive gait dynamics and to identify the underlying spatiotemporal gait characteristics that could increase the risk of falling in symp-HD. The experiment involved navigating around one or two ground-based obstacles under two conditions (walking while navigating around one obstacle, and walking while navigating around two obstacles). A total of 32 participants were included: 16 symp-HD individuals and 16 age- and sex-matched healthy controls. We used a GAITRite electronic walkway to examine the spatiotemporal gait characteristics and inter-trial gait variability while participants walked at their preferred speed. A minimum of six trials was completed for the baseline free walk and for each condition of navigating around the obstacles. For analysis, we separated all walking steps into three phases: approach steps, navigating steps, and recovery steps. The mean and inter-trial variability (within-participant standard deviation) of each step gait variable were calculated across the six trials. We found that symp-HD individuals significantly decreased their gait velocity and step length and increased their step-duration variability during the navigating and recovery steps compared with the approach steps. In contrast, healthy control individuals showed less difference from baseline in gait velocity, step time, and step length variability in both conditions and across all three phases. These findings indicate that increasing spatiotemporal gait variability may be a compensatory strategy adopted by symp-HD individuals to effectively navigate obstacles during walking. Such findings may benefit clinicians in developing strategies for HD individuals to improve functional outcomes in home- and hospital-based rehabilitation programs.
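
A spatiotemporal measure such as the within-participant step-length variability described above is simply the standard deviation across a participant's trials for each phase; the sketch below shows that computation on invented trial data, not the study's recordings.

```python
# Invented step-length data (cm) for one participant, six trials per phase;
# only the mean/variability computation itself mirrors the analysis described above.
import numpy as np

trials = {
    "approach":   [62.1, 61.8, 63.0, 62.4, 61.5, 62.9],
    "navigating": [55.3, 58.9, 53.7, 57.2, 60.1, 54.8],
    "recovery":   [58.0, 61.2, 56.5, 59.8, 62.3, 57.1],
}

for phase, lengths in trials.items():
    mean = np.mean(lengths)
    variability = np.std(lengths, ddof=1)   # within-participant SD across trials
    print(f"{phase:<11} mean={mean:.1f} cm  SD={variability:.2f} cm")
```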

Keywords: Huntington’s disease, gait variables, navigating around obstacle, basal ganglia dysfunction

Procedia PDF Downloads 424
4599 Towards Logical Inference for the Arabic Question-Answering

Authors: Wided Bakari, Patrice Bellot, Omar Trigui, Mahmoud Neji

Abstract:

This article opens a line of thought on the modeling and analysis of Arabic texts in the context of a question-answering system; it is a matter of going beyond traditional approaches focused on morphosyntactic analysis. We present a new approach that analyzes a text in order to extract correct answers and then transforms it into logical predicates. In addition, we would like to represent different levels of information within a text in order to answer a question and choose one answer among several proposed candidates. To do so, we transform both the question and the text into logical forms. Then, we try to recognize all entailments between them. The result of recognizing the entailment is a set of text sentences that can entail the user's question. Our work is now concentrated on the implementation step, in order to develop an Arabic question-answering system using techniques for recognizing textual entailment. In this context, the extraction of text features (keywords, named entities, and the relationships that link them) is considered the first step in our process of text modeling. The second step is the use of textual entailment techniques, which rely on the notion of inference and logical representation to extract candidate answers. The last step is the extraction and selection of the desired answer.
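
The sketch below is only a toy illustration of the idea of converting question and text into predicate form and checking entailment by predicate matching, shown with English tokens purely for readability; the authors' actual Arabic pipeline (feature extraction, entailment recognition, logical inference) is far richer.

```python
# Toy predicate-matching sketch, not the authors' Arabic QA system.
# Sentences are pre-converted (by hand here) into (relation, arg1, arg2) predicates.
text_predicates = {
    ("capital_of", "paris", "france"),
    ("located_in", "eiffel_tower", "paris"),
}
question_predicate = ("capital_of", "?x", "france")

def answers(question, facts):
    """Return bindings for '?x'-style variables that make the question entailed by the facts."""
    rel, a1, a2 = question
    results = []
    for frel, f1, f2 in facts:
        if frel != rel:
            continue
        binding, ok = {}, True
        for q, f in ((a1, f1), (a2, f2)):
            if q.startswith("?"):
                binding[q] = f
            elif q != f:
                ok = False
        if ok:
            results.append(binding)
    return results

print(answers(question_predicate, text_predicates))   # [{'?x': 'paris'}]
```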

Keywords: NLP, Arabic language, question-answering, recognition text entailment, logic forms

Procedia PDF Downloads 315
4598 Development of a Triangular Evaluation Protocol in a Multidisciplinary Design Process of an Ergometric Step

Authors: M. B. Ricardo De Oliveira, A. Borghi-Silva, E. Paravizo, F. Lizarelli, L. Di Thomazzo, D. Braatz

Abstract:

Prototypes are a critical feature in the product development process, as they help the project team visualize early concept flaws, communicate ideas and introduce initial product testing. Involving stakeholders, such as consumers and users, in prototype tests allows the gathering of valuable feedback, contributing to a better product and making the design process more participatory. Even though recent studies have shown that user evaluation of prototypes is valuable, few articles provide a method or protocol on how designers should conduct it. This multidisciplinary study (involving the areas of physiotherapy, engineering and computer science) aims to develop an evaluation protocol, using an ergometric step prototype as the product prototype to be assessed. The protocol consisted of performing two tests (the 2 Minute Step Test and the Portability Test) to allow users (patients) and consumers (physiotherapists) to have an experience with the prototype. Furthermore, the protocol contained four Likert-scale questionnaires (one for users and three for consumers) that asked participants how they perceived the design characteristics of the product (performance, safety, materials, maintenance, portability, usability and ergonomics) in their use of the prototype. Additionally, the protocol indicated the need to conduct interviews with the product designers, in order to link their feedback to that from the consumers and users. Both tests and interviews were recorded for further analysis. The participation criteria for the study were gender and age for patients, gender and experience with the 2 Minute Step Test for physiotherapists, and level of involvement in the product development project for designers. The questionnaires' reliability was validated using Cronbach's Alpha, and the quantitative data from the questionnaires were analyzed using non-parametric hypothesis tests with a significance level of 0.05 (p < 0.05) and descriptive statistics. As a result, this study provides a concise evaluation protocol which can assist designers in their development process, collecting quantitative feedback from consumers and users, and qualitative feedback from designers.
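
Cronbach's alpha, used above to check questionnaire reliability, follows directly from the item variances and the variance of the total score; the sketch below applies the standard formula to invented Likert responses, not the study's data.

```python
# Standard Cronbach's alpha on invented Likert-scale responses (rows = respondents, cols = items).
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total score
    return (k / (k - 1)) * (1.0 - item_var / total_var)

responses = [
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
    [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3],
]
print(round(cronbach_alpha(responses), 3))
```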

Keywords: Product Design, Product Evaluation, Prototypes, Step

Procedia PDF Downloads 101
4597 Design and Implementation of a 94 GHz CMOS Double-Balanced Up-Conversion Mixer for 94 GHz Imaging Radar Sensors

Authors: Yo-Sheng Lin, Run-Chi Liu, Chien-Chu Ji, Chih-Chung Chen, Chien-Chin Wang

Abstract:

A W-band double-balanced mixer for direct up-conversion using standard 90 nm CMOS technology is reported. The mixer comprises an enhanced double-balanced Gilbert cell with PMOS negative-resistance compensation for conversion gain (CG) enhancement and current injection for power consumption reduction and linearity improvement, a Marchand balun for converting the single-ended LO input signal to a differential signal, another Marchand balun for converting the differential RF output signal to a single-ended signal, and an output buffer amplifier for loading-effect suppression, power consumption reduction and CG enhancement. The mixer consumes a low power of 6.9 mW and achieves an LO-port input reflection coefficient of -17.8~-38.7 dB and an RF-port input reflection coefficient of -16.8~-27.9 dB for frequencies of 90~100 GHz. The mixer achieves a maximum CG of 3.6 dB at 95 GHz, and a CG of 2.1±1.5 dB for frequencies of 91.9~99.4 GHz; that is, the corresponding 3 dB CG bandwidth is 7.5 GHz. In addition, the mixer achieves an LO-RF isolation of 36.8 dB at 94 GHz. To the authors' knowledge, the CG, LO-RF isolation and power dissipation results are the best data ever reported for a 94 GHz CMOS/BiCMOS up-conversion mixer.

Keywords: CMOS, W-band, up-conversion mixer, conversion gain, negative resistance compensation, output buffer amplifier

Procedia PDF Downloads 513
4596 Induction Motor Analysis Using LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

The proposed paper deals with the modelling and analysis of an induction motor based on mathematical expressions, using the graphical programming environment of the Laboratory Virtual Instrument Engineering Workbench (LabVIEW). Modelling the induction motor with mathematical expressions enables the motor to be simulated with the various required parameters. Owing to the invention of variable speed drives, the study of induction motor characteristics has become complex. In this simulation, internal motor parameters such as stator resistance and reactance, rotor resistance and reactance, phase voltage, frequency, and losses are given as input. By varying the speed of the motor, the corresponding parameters can be obtained: input power, output power, efficiency, induced torque, slip, and current.
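
The quantities listed above follow from the standard per-phase equivalent circuit of the induction motor; the sketch below evaluates those textbook expressions in Python with assumed machine parameters and neglects the magnetizing branch for brevity (the paper's own model is built in LabVIEW).

```python
# Textbook per-phase equivalent-circuit calculation with assumed parameters
# (magnetizing branch neglected); the paper implements its model in LabVIEW.
import math

def motor_performance(v_phase, f, poles, r1, x1, r2, x2, s, p_mech_loss=0.0):
    z = complex(r1 + r2 / s, x1 + x2)          # series impedance seen by the source
    i_mag = abs(v_phase / z)                   # per-phase current magnitude (A)
    p_airgap = 3 * i_mag**2 * r2 / s           # air-gap power (W)
    w_sync = 4 * math.pi * f / poles           # synchronous speed (rad/s)
    torque = p_airgap / w_sync                 # induced torque (N·m)
    p_out = p_airgap * (1 - s) - p_mech_loss   # shaft output power (W)
    p_in = 3 * v_phase * i_mag * math.cos(math.atan2(z.imag, z.real))
    return i_mag, torque, p_out, p_out / p_in

current, torque, p_out, eff = motor_performance(
    v_phase=230, f=50, poles=4, r1=0.5, x1=1.5, r2=0.6, x2=1.5, s=0.04)
print(f"I={current:.1f} A, T={torque:.1f} N·m, Pout={p_out:.0f} W, eff={eff:.2%}")
```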

Keywords: induction motor, LabVIEW software, modelling and analysis, electrical and mechanical characteristics of motor

Procedia PDF Downloads 538
4595 Analysis of 3 dB Directional Coupler Based On Silicon-On-Insulator (SOI) Large Cross-Section Rib Waveguide

Authors: Nurdiani Zamhari, Abang Annuar Ehsan

Abstract:

The 3 dB directional coupler is designed using a silicon-on-insulator (SOI) large cross-section rib waveguide and simulated by the Beam Propagation Method at the communication wavelengths of 1.55 µm and 1.48 µm. The geometry has a rib height (H) of 6 µm and is varied in step factor (r), taking values of 0.5, 0.6, 0.7 and 0.8. The waveguide spacing is fixed at 5 µm and the slab width is symmetrical. In general, the 3 dB coupling lengths for the four different cross-sections are several millimetres long. The 1.48 µm wavelength gives a longer coupling length than 1.55 µm at the same step factor (r). In addition, low-loss propagation is achieved, with less than 2% propagation loss.

Keywords: 3 dB directional couplers, silicon-on-insulator, symmetrical rib waveguide, OptiBPM 9

Procedia PDF Downloads 496
4594 Rural Livelihood under a Changing Climate Pattern in the Zio District of Togo, West Africa

Authors: Martial Amou

Abstract:

This study was carried out to assess the households' livelihood situation under a changing climate pattern in the Zio district of Togo, West Africa. The study examined three important aspects: (i) assessment of the households' livelihood situation under a changing climate pattern, (ii) farmers' perception and understanding of local climate change, and (iii) determinants of the adaptation strategies undertaken in cropping patterns in response to climate change. To this end, secondary data sources and survey data collected from 235 farmers in four villages in the study area were used. A conceptual framework adapted from DFID's Sustainable Livelihood Framework, a two-step binary logistic regression model, and descriptive statistics were used as methodological approaches. Based on the Sustainable Livelihood Approach (SLA), the various factors revolving around the livelihoods of the rural community were grouped into social, natural, physical, human, and financial capital. The study found that the households' livelihood situation, represented by the overall livelihood index in the study area (34%), is below the standard average household livelihood security index (50%). Natural capital was found to be the poorest asset (13%), and this will severely affect the sustainability of livelihoods in the long run. The results from the descriptive statistics and the first-step regression (selection model) indicated that most of the farmers in the study area have a clear understanding of climate change, even though they have no idea of greenhouse gases as the main cause behind the issue. From the second-step regression (output model), education, farming experience, access to credit, access to extension services, cropland size, membership of a social group, and distance to the nearest input market were found to be the significant determinants of the adaptation measures undertaken in cropping patterns by farmers in the study area. Based on the results of this study, recommendations are made to farmers, policy makers, institutions, and development service providers in order to better target interventions which build, promote or facilitate the adoption of adaptation measures with the potential to build resilience to climate change and thereby improve rural livelihoods.
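
A two-step binary logit of the kind described above (a selection model for climate-change perception followed by an outcome model for adaptation among those who perceive change) can be sketched as follows on synthetic data; note that this simple version applies no formal selection-bias correction and uses only a few of the covariates listed above.

```python
# Sketch of a two-step binary logit on synthetic data (not the survey data);
# step 1 models perception of climate change, step 2 models adaptation among
# farmers who perceived change. No Heckman-type correction is applied.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 235
X = sm.add_constant(np.column_stack([
    rng.integers(0, 16, n),        # years of education
    rng.integers(1, 40, n),        # farming experience
    rng.integers(0, 2, n),         # access to credit (0/1)
]))

perceived = rng.binomial(1, 0.8, n)                    # step-1 outcome (synthetic)
step1 = sm.Logit(perceived, X).fit(disp=0)

mask = perceived == 1                                  # only perceivers enter step 2
adapted = rng.binomial(1, 0.6, mask.sum())             # step-2 outcome (synthetic)
step2 = sm.Logit(adapted, X[mask]).fit(disp=0)

print(step1.params.round(3))
print(step2.params.round(3))
```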

Keywords: climate change, rural livelihood, cropping pattern, adaptation, Zio District

Procedia PDF Downloads 303
4593 Shared Decision-Making in Holistic Healthcare: Integrating Evidence-Based Medicine and Values-Based Medicine

Authors: Ling-Lang Huang

Abstract:

Research Background: Historically, the evolution of medicine has not only aimed to extend life but has also inadvertently introduced suffering in the process of maintaining life, presenting a contemporary challenge. We must carefully assess the conflict between the length of life and the quality of living. Evidence-Based Medicine (EBM) exists primarily to ensure the quality of cures. However, EBM alone does not fulfill our ultimate medical goals; we must also consider Values-Based Medicine (VBM) to find the best treatment for patients. Research Methodology: We can attempt to integrate EBM with VBM. Within the five steps of EBM, the first three steps (Ask, Acquire, Appraise) focus on the physical aspect of humans. However, in the fourth and fifth steps (Apply, Assess), the focus shifts from the physical to applying evidence-based treatment to the patient and assessing its effectiveness, considering a holistic approach to the individual. To take VBM into account for patients, we can divide the process into three steps. The first step is 'awareness': recognizing that each patient inhabits a different life-world and possesses unique differences. The second step is 'integration', akin to the hermeneutic concept of the Fusion of Horizons; this means being aware of differences and also understanding the origins of these patient differences. The third step is 'respect', which involves setting aside our adherence to medical objectivity and scientific rigor to respect the ultimate healthcare decisions made by individuals regarding their lives. Discussion and Conclusion: After completing these three steps of VBM, we can return to the fifth step of EBM: Assess. Our assessment can now transcend the physical-treatment focus of the initial steps to align with a holistic care philosophy.

Keywords: shared decision-making, evidence-based medicine, values-based medicine, holistic healthcare

Procedia PDF Downloads 26
4592 Design of an Augmented Automatic Choosing Control with Constrained Input by Lyapunov Functions Using Gradient Optimization Automatic Choosing Functions

Authors: Toshinori Nawata

Abstract:

In this paper, a nonlinear feedback control called augmented automatic choosing control (AACC) for a class of nonlinear systems with constrained input is presented. When designing the control, a constant term which arises from the linearization of a given nonlinear system is treated as a coefficient of a stable zero dynamics. The parameters of the control are suboptimally selected by maximizing the stable region in the sense of Lyapunov, with the aid of a genetic algorithm. This approach is applied to a field excitation control problem of a power system to demonstrate the effectiveness of the AACC. Simulation results show that the new controller can improve performance remarkably well.

Keywords: augmented automatic choosing control, nonlinear control, genetic algorithm, zero dynamics

Procedia PDF Downloads 459
4591 Digital Forgery Detection by Signal Noise Inconsistency

Authors: Bo Liu, Chi-Man Pun

Abstract:

A novel technique for digital forgery detection based on signal noise inconsistency is proposed in this paper. A forged area spliced in from another picture contains features which may be inconsistent with the rest of the image; the noise pattern and level are possible factors that reveal such inconsistency. To detect such noise discrepancies, the test picture is initially segmented into small pieces. The noise pattern and level of each segment are then estimated by using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
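
The segment-wise noise-estimation step can be illustrated with a simple block-wise estimator; the sketch below uses a common median-absolute-deviation estimate on a Laplacian-filtered image and a synthetic half-noisy test image, whereas the paper itself uses several filters followed by an energy-based graph cut, which are not reproduced here.

```python
# Block-wise noise-level map via a MAD estimator on the Laplacian response;
# this only illustrates the noise-inconsistency idea, not the full published method.
import numpy as np
from scipy.ndimage import laplace

def blockwise_noise(image, block=32):
    """Return a 2-D map of estimated noise sigma for each block of a grayscale image."""
    hp = laplace(image.astype(float))                 # high-pass (Laplacian) response
    h, w = image.shape
    sigma = np.zeros((h // block, w // block))
    for r in range(sigma.shape[0]):
        for c in range(sigma.shape[1]):
            patch = hp[r*block:(r+1)*block, c*block:(c+1)*block]
            mad = np.median(np.abs(patch - np.median(patch)))
            # MAD -> sigma, then undo the Laplacian gain (sqrt of sum of squared kernel weights).
            sigma[r, c] = mad / (0.6745 * np.sqrt(20))
    return sigma

# Synthetic test: the right half carries stronger noise, so its blocks stand out.
img = np.full((128, 128), 120.0)
img[:, :64] += np.random.normal(0, 3, (128, 64))
img[:, 64:] += np.random.normal(0, 12, (128, 64))
print(blockwise_noise(img).round(1))
```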

Keywords: forgery detection, splicing forgery, noise estimation, noise

Procedia PDF Downloads 434
4590 Evaluation of Gene Expression after in Vitro Differentiation of Human Bone Marrow-Derived Stem Cells to Insulin-Producing Cells

Authors: Mahmoud M. Zakaria, Omnia F. Elmoursi, Mahmoud M. Gabr, Camelia A. AbdelMalak, Mohamed A. Ghoneim

Abstract:

Many protocols have been published for the differentiation of human mesenchymal stem cells (MSCs) into insulin-producing cells (IPCs) that secrete the insulin hormone, with the aim of treating diabetes. Our aim is to evaluate relative gene expression for each independent protocol. Human bone marrow cells were derived from three volunteers suffering from diabetes. After expansion of the mesenchymal stem cells, differentiation was performed by three different protocols (the one-step protocol used conophylline protein, the two-step protocol depended on trichostatin-A, and the three-step protocol started with beta-mercaptoethanol). Evaluation of gene expression was carried out by real-time PCR for pancreatic endocrine genes, transcription factors, glucose transporters, precursor markers, pancreatic enzymes, proteolytic cleavage enzymes, extracellular matrix, and cell surface proteins. Insulin secretion was quantified by an immunofluorescence technique in 24-well plates. Most of the genes studied were up-regulated in the in vitro differentiated cells, and insulin production was observed in all three independent protocols. There were slight increases in endocrine mRNA expression and insulin production with the two-step protocol. Thus, the two-step protocol was more efficient in expressing pancreatic endocrine genes and producing insulin than the other two protocols.
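
Relative expression from real-time PCR is most commonly reported with the 2^(-ΔΔCt) method; the abstract does not state the exact quantification model used, so the sketch below simply shows that standard calculation with made-up Ct values.

```python
# Standard 2^(-ddCt) relative-expression calculation on invented Ct values;
# the abstract does not specify the quantification model actually used.
def relative_expression(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a pancreatic gene in differentiated IPCs vs. undifferentiated MSCs (hypothetical Cts).
fold_change = relative_expression(24.1, 18.0, 29.5, 18.2)
print(round(fold_change, 1))
```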

Keywords: mesenchymal stem cells, insulin producing cells, conophylline protein, trichostatin-A, beta-mercaptoethanol, gene expression, immunofluorescence technique

Procedia PDF Downloads 193
4589 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and from it one infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
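
The core operation described above, forming a single distribution from the normalized product of the input probability density functions, can be sketched numerically as follows; the exponential example densities are generic placeholders, not the flood-recovery data.

```python
# Numerical conflation: normalized product of two PDFs on a common grid.
# The example densities are generic exponentials, not the flood-recovery inputs.
import numpy as np
from scipy import stats

t = np.linspace(0, 60, 2001)                       # recovery-time grid (e.g., days)
pdf_severe = stats.expon(scale=20).pdf(t)          # slow recovery from severe events
pdf_nuisance = stats.expon(scale=5).pdf(t)         # fast recovery from nuisance events

product = pdf_severe * pdf_nuisance
dt = t[1] - t[0]
conflated = product / (product.sum() * dt)         # normalize so it integrates to 1

mean_recovery = (t * conflated).sum() * dt         # sits between the parent means
print(f"conflated mean recovery time ~ {mean_recovery:.1f}")
```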

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 75
4588 Stereo Camera Based Speed-Hump Detection Process for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Yong-Hun Kim, Soo-Young Suk, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective speed hump detection process for the daytime. We focus only on round types of speed humps in the dynamic daytime road environment. The proposed scheme consists mainly of two processes: stereo matching and speed hump detection; our work focuses on the speed hump detection process. The speed hump detection process consists of a noise reduction step, a data fusion step, and a speed hump detection step. The proposed system was tested on an Intel Core CPU at 2.80 GHz with 4 GB RAM in urban road environments. The frame rate of the test videos is 30 frames per second, and the size of each frame of the grabbed image sequences is 1280 pixels by 670 pixels. Using object-marked sequences acquired with an on-vehicle camera, we recorded speed hump and non-speed hump samples. According to the test results, our proposed method can be applied in real-time systems, with a computation time of 13 ms, and it reaches 96.1%.
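
Purely as a much-simplified illustration of the noise-reduction and detection idea, the sketch below works on a synthetic one-dimensional road height profile; the actual system operates on stereo disparity maps and includes a data-fusion step that is not reproduced here.

```python
# Much-simplified 1-D illustration: detect a round hump in a synthetic road height
# profile after median-filter noise reduction. Thresholds and sizes are assumptions.
import numpy as np
from scipy.signal import medfilt

x = np.linspace(0, 20, 400)                              # distance along the road (m)
profile = 0.02 * np.random.randn(x.size)                 # road noise (m)
hump = np.abs(x - 10) < 1.5
profile[hump] += 0.08 * np.cos(np.pi * (x[hump] - 10) / 3.0)  # ~8 cm round hump

smooth = medfilt(profile, kernel_size=9)                 # noise reduction step
above = smooth > 0.04                                    # height threshold (m)
detected = above.sum() * (x[1] - x[0]) > 1.0             # require > 1 m of raised road
print("speed hump detected:", detected)
```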

Keywords: data fusion, round types speed hump, speed hump detection, surface filter

Procedia PDF Downloads 493
4587 Active Learning in Computer Exercises on Electronics

Authors: Zoja Raud, Valery Vodovozov

Abstract:

Modelling and simulation provide an effective way to acquire engineering experience. The active approach to modelling and simulation proposed in the paper involves, besides the compulsory part directed by traditional step-by-step instructions, a new optional part based on the human habit of designing, thus stimulating efforts towards success in active learning. Computer exercises as part of the engineering curriculum incorporate a set of effective activities. In addition to the knowledge acquired in theoretical training, the described educational arrangement helps to develop problem-solving and computation skills and experimentation performance, along with enhancing practical experience and qualification.

Keywords: modelling, simulation, engineering education, electronics, active learning

Procedia PDF Downloads 372
4586 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradient to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack. This adversarial attack is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
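
Both attacks are short to express in code; the PyTorch sketch below assumes a generic classifier `model` and inputs normalized to [0, 1] (consistent with the MNIST setting described), and is an illustration rather than the authors' exact implementation.

```python
# Minimal FGSM and PGD sketches for a generic classifier with inputs in [0, 1].
import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()                     # gradient w.r.t. the image
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, loss_fn, x, y, eps, alpha, steps):
    x_adv = x.clone().detach()
    for _ in range(steps):                                  # many small steps instead of one big one
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)  # project into eps-ball
    return x_adv

# Adversarial (FGSM/PGD) training simply replaces each clean batch x with the
# attacked batch inside the usual training loop before computing the loss.
```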

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 171
4585 55 dB High Gain L-Band EDFA Utilizing Single Pump Source

Authors: M. H. Al-Mansoori, W. S. Al-Ghaithi, F. N. Hasoon

Abstract:

In this paper, we experimentally investigate the performance of an efficient high gain triple-pass L-band Erbium-Doped Fiber (EDF) amplifier structure with a single pump source. The amplifier gain and noise figure variation with EDF pump power, input signal power and wavelengths have been investigated. The generated backward Amplified Spontaneous Emission (ASE) noise of the first amplifier stage is suppressed by using a tunable band-pass filter. The amplifier achieves a signal gain of 55 dB with low noise figure of 3.8 dB at -50 dBm input signal power. The amplifier gain shows significant improvement of 12.8 dB compared to amplifier structure without ASE suppression.

Keywords: optical amplifiers, EDFA, L-band, optical networks

Procedia PDF Downloads 321
4584 The Artificial Intelligence Technologies Used in PhotoMath Application

Authors: Tala Toonsi, Marah Alagha, Lina Alnowaiser, Hala Rajab

Abstract:

This report is about the Photomath app, an AI application that uses image recognition technology, specifically optical character recognition (OCR) algorithms. The OCR algorithm translates the image into a mathematical equation, and the app automatically provides a step-by-step solution. The application supports decimals, basic arithmetic, fractions, linear equations, and several functions such as logarithms. Testing was conducted to examine the usage of this app, and results were collected by surveying ten participants; the results were then analyzed. This paper seeks to answer the question of how accurate the artificial intelligence features are and how fast the app processes input. It is hoped this study will inform users about the efficiency of the AI in Photomath.
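
Photomath's internal pipeline is proprietary; purely to illustrate the OCR-then-solve idea described above, the sketch below chains a generic OCR call with a symbolic solver, where pytesseract and sympy are stand-ins chosen for this sketch rather than what the app actually uses.

```python
# Illustrative OCR-then-solve pipeline; Photomath's own implementation is proprietary,
# and the pytesseract/sympy libraries here are stand-ins for this sketch.
import pytesseract
from PIL import Image
from sympy import Eq, Symbol, solve, sympify

def solve_equation_text(text: str):
    """Turn OCR output such as '2*x + 3 = 11' into a solved equation."""
    lhs, rhs = text.strip().split("=")
    return solve(Eq(sympify(lhs), sympify(rhs)), Symbol("x"))

def solve_from_image(path: str):
    """OCR an image of an equation (hypothetical file), then solve it symbolically."""
    return solve_equation_text(pytesseract.image_to_string(Image.open(path)))

print(solve_equation_text("2*x + 3 = 11"))   # [4]
```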

Keywords: photomath, image recognition, app, OCR, artificial intelligence, mathematical equations.

Procedia PDF Downloads 147