Search results for: wireless power transfer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8994

654 Failing Regeneration, Displacement, and Continued Consequences on Future Urban Planning Processes in Distressed Neighborhoods in Tehran

Authors: Razieh Rezabeigi Sani, Alireza Farahani, Mahdi Haghi

Abstract:

Displacement, local discontent, and forced exclusion have become prominent parts of urban regeneration activities in the Global North and South. This paper discusses the processes of massive displacement and neighborhood alteration as the consequences of a large-scale political/ideological placemaking project in central Tehran that transformed people's daily lives in surrounding neighborhoods. The conversion of Imam Hussein Square and connecting 17-Shahrivar Street to a pedestrian plaza in 2016 resulted in adjacent neighborhoods' physical, social, and economic degradation. The project has downgraded the economic and social characteristics of urban life in surrounding neighborhoods, commercialized residential land uses, displaced local people and businesses, and created unprecedented housing modes. This research has been conducted in two stages; first, after the project's implementation between 2017-2018, and second, when the street was reopened after local protests in 2021. In the first phase, 50+ on-site interviews were organized with planners, managers, and dwellers about the decision-making processes, design, and project implementation. We find that the project was based on the immediate political objectives and top-down power exertion of the local government in creating exclusive spaces (for religious ceremonies) without considering locals' knowledge, preferences, lifestyles, and everyday interactions. In the continued research in 2021, we utilized data gathered in facilitation activities and several meetings and interviews with local inhabitants and businesses to explore, design, and implement initiatives for bottom-up planning in these neighborhoods. The top-down and product-oriented (rather than process-oriented) planning, dependency on municipal financing rather than local partnerships, and lack of public participation proved to have continued effects on local participation. The paper concludes that urban regeneration projects must be based on the participation of different private/public actors, sustainable financial resources, and overall social and spatial analysis of the peripheral area before interventions.

Keywords: displacement, urban regeneration, distressed neighborhoods, ideological placemaking, Tehran

Procedia PDF Downloads 74
653 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry to establish the design of processing equipment. However, for tropical fruit juice, there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it took the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.
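
The line-source relation used above (conductivity from the slope of the temperature rise against the logarithm of time) can be sketched in a few lines of Python; the 16.33 W/m probe power is taken from the abstract, while the temperature record and variable names below are purely illustrative.

```python
import numpy as np

Q = 16.33                                  # heater power per unit length, W/m (value quoted above)

# Illustrative temperature-rise record: one reading every 4 s for 240 s
t = np.arange(4.0, 244.0, 4.0)             # time, s
dT = 2.2 * np.log(t) + 0.5                 # synthetic temperature rise, °C

# Transient line-source model: dT = (Q / (4*pi*k)) * ln(t) + const
slope, _ = np.polyfit(np.log(t), dT, 1)    # slope of dT versus ln(t)
k = Q / (4.0 * np.pi * slope)              # thermal conductivity, W/(m·°C)
print(f"Estimated thermal conductivity: {k:.3f} W/(m·°C)")
```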

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 325
652 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology, namely simulation, to identify the effectiveness of adaptive rowing. For our study we simulate the arms-only single scull of adaptive rowing. The fastest way to row the 1000 m course was investigated by studying the sculling angle using simulation modelling. A simulation model of a rowing system was developed using the Matlab software package, based on equations of motion that include the many variables involved in moving the boat, such as oar length, blade velocity and sculling style. The boat speed, power and energy consumption of the system were computed. This simulation model can predict the force acting on the boat. The optimum sculling angle was determined by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle. The output of the model is the boat velocity over 1000 m. The present study suggests that an optimum sculling angle exists and depends on the sculling style. The optimum angles for blade entry and release with respect to the perpendicular through the pin are -57.00 and 22.0 degrees for the first style, -57.00 and 22.0 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation for rowing has been developed and presented. The results suggest that it may be advantageous for rowers to select sculling angles appropriate to their sculling styles. The optimum sculling angles of a rower depend on the sculling style of that rower. The findings of this paper can be summarised in three points: 1. There is an optimum sculling angle in the arms-only single scull of adaptive rowing. 2. The optimum sculling angles depend on the sculling styles. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust and timing with the computer simulation provides the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements during rowing will help both the rower and the coach to conceptualize the components of movements that may have been previously unclear or even undefined.
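
A heavily simplified sketch of this kind of angle-sweep simulation is shown below; the single longitudinal equation of motion, the stroke model, and every parameter value are illustrative assumptions and not the authors' Matlab model.

```python
import numpy as np

def race_time(entry_deg, release_deg, m=90.0, c_drag=3.5, f_peak=180.0,
              duty=0.6, dt=0.01, distance=1000.0):
    """Integrate m*dv/dt = F - c*v^2 until the boat covers `distance` metres.
    The propulsive force is crudely taken as the peak blade force scaled by the
    mean cosine of the oar angle over the drive and by a duty factor."""
    angles = np.radians(np.linspace(entry_deg, release_deg, 50))
    f_mean = f_peak * np.mean(np.cos(angles)) * duty
    v = x = t = 0.0
    while x < distance:
        v += dt * (f_mean - c_drag * v**2) / m
        x += dt * v
        t += dt
    return t

# Sweep candidate entry/release angles (degrees from the perpendicular through the pin)
for entry, release in [(-57.00, 22.0), (-51.57, 28.65), (-45.84, 34.38)]:
    print(f"entry {entry:6.2f} deg, release {release:5.2f} deg -> {race_time(entry, release):6.1f} s")
```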

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 436
651 Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence

Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan

Abstract:

It is known that industrial applications using solar panels constructed from silicon solar cells require high-efficiency performance. One of the main problems in solar panels is mechanical and structural defects of various kinds, which cause a decrease in the generated power. Various techniques are used to analyse defects in solar cells; however, thermal imaging is a fast and simple method for locating defects. The main goal of this work was to analyse defects in constructed flexible silicon photovoltaic modules via thermal imaging and the electroluminescence method. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A, Poland) with a DC conventional source. Electroluminescence was observed at the Steinbeis Center Photovoltaics (Stuttgart, Germany) using a camera containing a Si-CCD, 16 Mpix Kodak KAF-16803-type detector. The camera has a typical spectral response in the range 350-1100 nm with a maximum QE of 60% at 550 nm. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with a size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker regions, where the solar cell is inactive, and brighter regions corresponding to correctly working photovoltaic cells. The electroluminescence method is non-destructive and gives greater image resolution, thereby allowing a more precise evaluation of microcracks in solar cells after the lamination process. Our study showed good correlation between defects observed by thermal imaging and electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows the fast, simple and inexpensive localization of defects in single solar cells and modules. Moreover, the thermographic camera was also useful for detecting the electrical interconnections between single solar cells.

Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging

Procedia PDF Downloads 288
650 Academic Identities in Transition

Authors: Caroline Selai, Sushrut Jadhav

Abstract:

Background: University College London (UCL), the first secular university in England to admit students regardless of their religion and gender, has nearly 29,000 students, of which approximately 30% are international students. The UCL Cultural Consultation Service (CCS) for staff and students is a unique service that provides assistance to staff and students experiencing challenges in their teaching, enabling, support work or studies which they believe may have a cultural component. The service provides one-to-one and group consultations, lectures, seminars, ‘grand rounds’, interactive workshops and bespoke interventions. Data: This paper presents a content analysis of CCS referrals over the last 36 months. We focus on the experience of international students, many of whom experience not only a challenge to their academic identity but also a profound challenge to their personal cultural identity. We also present 3 vignettes to illustrate how students interpret, accept, contest and resist changes in their cultural and academic identity. Discussion: This paper highlights (i) how students from collectivist cultures attempt to assimilate within an individualistic, highly competitive western university that is bound by its own institutional norms; (ii) problems in negotiating challenges at the interface of culture and gender; (iii) the impact of culturally different hierarchies of power, discrimination and authority; and (iv) the significance of earlier traumatic and kinship conflicts. Many international students’ social identities are shaped by their cultural and family scripts. A large number have been taught that their teachers are to be revered and their teachings unchallenged. This is at odds with the quintessential goal of the western university to encourage healthy scepticism and hone students’ critical thinking skills. Conclusions: Pupil-teacher ‘cultural transference’ and shifts in the cultural academic identities of students underscore critical aspects of developmental and learning challenges for students. Staff-student cultural conflict requires a broader, systemic analysis of students, staff and the wider organisation. Our findings challenge Eurocentric psychodynamic concepts such as the nature of the parent-child relationship in Western Europe. We argue for a broader, more inclusive approach to develop both effective pedagogic skills in euro-american academic institutions and culturally-appropriate psychodynamic theory to underpin the counselling of international students.

Keywords: academic identity, cultural transference, cultural consultation in higher education, cultural formulation, cultural identity.

Procedia PDF Downloads 435
649 Effects of Lower and Upper Body Plyometric Training on Electrocardiogram Parameters of University Athletes

Authors: T. N. Uzor, C. O. Akosile, G. O. Emeahara

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and is used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower and upper body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted for the study. Seventy-two male university athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate in the study, but only twenty-three completed the study. The volunteer athletes were apparently healthy, physically active and free of any lower and upper extremity bone injuries for the past one year, and they had no medical or orthopedic injuries that might affect their participation in the study. Ten subjects were purposively assigned to each of the three groups: lower body plyometric training (LBPT), upper body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises: lower body (ankle hops, squat jumps, tuck jumps) and upper body plyometric training (push-ups, medicine ball chest throws and side throws) at moderate intensity. The general data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while the paired samples t-test was used to test the hypotheses. The results revealed that athletes who were trained using LBPT had reduced ECG parameters compared with those in the control group. The results also revealed that athletes trained using both LBPT and UBPT showed no significant differences in ECG parameters following ten weeks of plyometric training compared with those in the control group, except in the Q wave, R wave and S wave (QRS) complex. Based on the findings of the study, it was recommended, among others, that coaches should include both LBPT and UBPT as part of athletes’ overall training programme from primary to tertiary institutions to optimise performance as well as reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: concentric, eccentric, electrocardiogram, plyometric

Procedia PDF Downloads 121
648 An Innovation Decision Process View in an Adoption of Total Laboratory Automation

Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu

Abstract:

With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires quite high funding. This study explores an early adoption experience by Taiwan’s large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers’ innovation-decision process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to proceed with the redesign of the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital’s new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues had reached a consensus for change. Thus, the top managers arranged a series of benchmark visits to stimulate colleagues into being aware of and accepting TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits and so on. 3. Decision stage: This TLA suggestion was well supported by the top management of CMG and, finally, they made a decision to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposal and several vendor briefings, CMG confirmed their laboratory automation architecture and finally completed the contracts. At the same time, a cross-department project team was formed and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of the operations and to offer an immediate response to any adjustment. The main project tasks included: 1) completion of the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with the hospital information system (HIS)/laboratory information system (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; and 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically-prepared tube mechanism. The study summarizes the following continuous improvement orientations: (1) flexible reference range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; and (4) further evaluation of whether to install tube (specimen) delivery tracks.

Keywords: innovation decision process, total laboratory automation, health care

Procedia PDF Downloads 394
647 Cartography through Picasso’s Eyes

Authors: Desiree Di Marco

Abstract:

The aim of this work is to show, through the lens of art, first, what kind of reality was represented in fascist maps, and second, to study the impact of the fascist regime’s cartography (FRC) on the observer's eye. In this study, it is assumed that the FRC’s representation of reality was simplified, timeless, and even a-spatial because it underrates the concept of territoriality. Cubism and Picasso’s paintings will be used as counter-examples to demystify fascist cartography’s ideological assumptions. The difference between the gaze of an observer looking at the surface of a fascist map and the gaze of someone observing a Picasso painting is striking. Because there is always something dark, hidden, behind and inside a map, the world of fascist maps was a world built from the observation of a “window” that distorted reality and trapped the eyes of the observers. Moving across the map, they seem as if they were hypnotized. Cartohypnosis is the state in which the observer finds himself enslaved by the attractive force of the map, which uses a sort of “magic” geography: a geography that, by means of symbolic language, never has as its primary objective the attempt to show us reality in its complexity, but rather that of performing for its audience. Magical geography and hypnotic cartography blended together in fascism, creating an almost mystical, magical relationship that demystified reality in order to reduce the world to a conquerable space. This reduction offered the observer the possibility of conceiving new dimensions: of the limit, of the boundary, elements with which the subject felt fully involved and in which the aesthetic force of the images demonstrated all its strength. But in the early 20th century, the combination of art and cartography gave rise to new possibilities. Cubism, which more than all the other artistic currents showed us how dangerous the logic of observing reality from a single point of view can be, is an example. Cubism was an artistic movement that brought about a profound transformation in pictorial culture. It was not only a revolution of pictorial space; it was a revolution of our conception of pictorial space. Up until that time, men and women were more inclined to believe in the power of images and their representations. Cubist painters rebelled against this blindness by claiming that art must always offer an alternative. Indeed, the contribution of this work is precisely to show how art can provide alternatives to even the most horrible regimes and the most atrocious human misfortunes. It also enriches the field of cartography because it "reassures" it by showing how much cartography can gain when other disciplines come close to it. Only in this way can researchers increase the chances of cartography achieving greater diffusion at the academic level.

Keywords: cartography, Picasso, fascism, culture

Procedia PDF Downloads 34
646 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns

Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis

Abstract:

The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA’s Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂ ) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and spaceborne instrument with lower platform consumption of size, mass, and power, and with improved performance. The Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital -2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regional areas of the Continental United States. Regional CO₂ distributions of the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.

Keywords: CO₂ measurement, IMCW, CW lidar, laser spectroscopy

Procedia PDF Downloads 132
645 An ICF Framework for Game-Based Experiences in Geriatric Care

Authors: Marlene Rosa, Susana Lopes

Abstract:

Board games have been used for different purposes in geriatric care, demonstrating good results for health in general. However, there is no conceptual framework that can help professionals and researchers in this area to design intervention programs or to think about future studies. The aim of this study was to provide a pilot collection of board games’ serious purposes in geriatric care, using the WHO framework for health and disability. Case studies were developed in seven geriatric residential institutions from the central region of Portugal that are included in the AGILAB program. The AGILAB program is a serious-game-based method to train and spread the implementation of board games in geriatric care. Each institution provides 2 hours/week of experiences using the TATI Hand Game for serious purposes and then answers questions about a case study (player characteristics; explanation of changes in players' health according to this game experience). Two independent researchers read the information and classified it according to the International Classification of Functioning, Disability and Health (ICF) categories. Any discrepancy was solved in a consensus meeting. Results indicate an important variability in body functions and structures: specific mental functions (e.g., b140 Attention functions, b144 Memory functions), b156 Perceptual functions, b2 sensory functions and pain (e.g., b230 Hearing functions; b265 Touch function; b280 Sensation of pain), and b7 neuromusculoskeletal and movement-related functions (e.g., b730 Muscle power functions; b760 Control of voluntary movement functions; b710 Mobility of joint functions). Less variability was found in the activities and participation domains, such as purposeful sensory experiences (d110-d129) (e.g., d115 Listening), communication (d3), d710 basic interpersonal interactions, and d920 recreation and leisure (d9200 Play; d9205 Socializing). In conclusion, this framework, designed from a brief game-based experience, includes mental, perceptual, sensory, neuromusculoskeletal, and movement-related functions and participation in sensory, communication, and leisure domains. More studies, including different experiences and a higher number of users, should be developed to provide a more comprehensive ICF framework for game-based experiences in geriatric care.

Keywords: board game, aging, framework, experience

Procedia PDF Downloads 100
644 Long Time Oxidation Behavior of Machined 316 Austenitic Stainless Steel in Primary Water Reactor

Authors: Siyang Wang, Yujin Hu, Xuelin Wang, Wenqian Zhang

Abstract:

Austenitic stainless steels are widely used in the nuclear industry to manufacture critical components owing to their excellent corrosion resistance at high temperatures. Almost all the components used in nuclear power plants are produced by surface finishing (surface cold work) such as milling, grinding and so on. The change in surface state induced by machining has a great influence on corrosion behavior. In the present study, the long-time oxidation behavior of machined 316 austenitic stainless steel exposed to a simulated pressurized water reactor environment was investigated, considering different surface states. Four surface finishes were produced by electro-polishing (P), grinding (G), and two milling processes (M and M1), respectively. Before oxidation, the surface Vickers micro-hardness and surface roughness of each type of sample were measured. The corrosion behavior of the four types of sample was studied using the oxidation weight-gain method over six oxidation periods. The oxidation times of the periods were 120 h, 216 h, 336 h, 504 h, 672 h and 1344 h, respectively. SEM was used to observe the surface morphology of the oxide film in several periods. The results showed that the oxide film on austenitic stainless steel has a duplex-layer structure. The inner oxide film is continuous and compact, while the outer layer is composed of oxide particles. The oxide particles consisted of large particles (nearly micron size) and small particles (dozens of nanometers to a few hundred nanometers). The formation of oxide particles could be significantly affected by the machined surface state. The large particles on the cold-worked samples (ground and milled) appeared earlier than on the electro-polished one, and the milled sample had the largest particle size, followed by the ground and the electro-polished ones. For the machined samples, the large particles were distributed almost exclusively along the direction of the machining marks. Severe exfoliation was observed on one milled surface (M), which had the most heavily cold-worked layer, while rare, local exfoliation occurred on the ground sample (G) and the other milled sample (M1). The electro-polished sample (P) did not exfoliate at all.

Keywords: austenitic stainless steel, oxidation, machining, SEM

Procedia PDF Downloads 262
643 Impact of Instrument Transformer Secondary Connections on Performance of Protection System: Experiences from Indian POWERGRID

Authors: Pankaj Kumar Jha, Mahendra Singh Hada, Brijendra Singh, Sandeep Yadav

Abstract:

Protective relays are commonly connected to the secondary windings of instrument transformers, i.e., current transformers (CTs) and/or capacitive voltage transformers (CVTs). The purpose of CTs and CVTs is to provide galvanic isolation from high voltages and to reduce primary currents and voltages to nominal quantities recognized by the protective relays. Selecting the correct instrument transformers for an application is imperative: failing to do so may compromise the relay’s performance, as the output of the instrument transformer may no longer be an accurately scaled representation of the primary quantity. Having an accurately rated instrument transformer is of no use if these devices are not properly connected. The performance of the protective relay is reliant on its programmed settings and on the current and voltage inputs from the instrument transformers' secondaries. This paper will help in understanding the fundamental concepts of the connection of instrument transformers to protective relays and the effect of incorrect connections on the performance of protective relays. Multiple case studies of protection system mal-operations due to incorrect connections of instrument transformers will be discussed in detail in this paper. Apart from the connection issues of instrument transformers to protective relays, this paper will also discuss the effect of multiple earthing of CT and CVT secondaries on the performance of the protection system. Case studies presented in this paper will help the readers analyse the problem through real-world challenges in complex power system networks. This paper will also help the protection engineer in the better analysis of disturbance records. CT and CVT connection errors can lead to undesired operations of protection systems. However, many of these operations can be avoided by adhering to industry standards and implementing tried-and-true field testing and commissioning practices. Understanding the effects of a missing CVT neutral, multiple earthing of the CVT secondary, and multiple grounding of CT star points on the performance of the protection system through real-world case studies will help protection engineers better commission and maintain the protection system.

Keywords: bus reactor, current transformer, capacitive voltage transformer, distance protection, differential protection, directional earth fault, disturbance report, instrument transformer, ICT, REF protection, shunt reactor, voltage selection relay, VT fuse failure

Procedia PDF Downloads 48
642 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance

Authors: Rahul Bishnoi

Abstract:

Theatre, as Anne Ubersfeld calls it, is a paradox. At once, it is both a literary work and a physical representation. Theatre as a text is eternal, reproducible, and identical, while as a performance, theatre is momentary and never identical to the previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, or the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes’ argument of discrete temporality, i.e. the author is the before, and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed and then gets consumed by the reader. On the other hand, theatrical production is immediate; an actor performs and the audience witnesses it instantaneously. Time, so to speak, does not separate the author, the text, and the reader anymore. The question of authorship gets further complicated in Augusto Boal’s “Theatre of the Oppressed” movement, where the audience is a direct participant like the actors in the performance. In this research, through an experimental performance, the duality of theatre is explored with the authorship discourse. And the conventional definition of authorship is subjected to additional complexity by erasing the distinction between an actor and the audience. The design/methodology of the experimental performance is as follows: The audience will be asked to produce a text under an anonymous virtual alias. The text, as it is being produced, will be read and performed by the actor. The audience, who are also collectively “authoring” the text, will watch this performance and write further until everyone has contributed with one input each. The cycle of writing, reading, performing, witnessing, and writing will continue until the end. The intention is to create a dynamic system of writing/reading with the embodiment of the text through the actor. The actor is giving up the power to the audience to write the spoken word, stage instructions and direction while still keeping the agency of interpreting that input and performing in the chosen manner. This rapid conversation between the actor and the audience also creates a conversion of authorship. The main conclusion of this study is a perspective on the nature of the dynamic authorship of theatre, containing a critical enquiry of the collaboratively produced text, an individually performed act, and a collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a ‘scriptor’ and breaks it further by involving the audience in the authorship as well.

Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre

Procedia PDF Downloads 70
641 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most of them are based on the S-N curve approach using Miner's rule damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress-range information for each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used here to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future. Results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts behaving exponentially.
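
A compact illustration of the crack-growth step described above, Paris-Erdogan integration of rainflow-derived stress ranges inside a Monte Carlo loop, might look like the sketch below; the material constants, stress-range distribution, and cycle counts are placeholder assumptions, not the values used for the Oaxaca tower.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder Paris-Erdogan constants for steel: da/dN = C * (dK)^m, dK in MPa*sqrt(m)
C, m_exp = 3.0e-12, 3.0
Y = 1.12                       # geometry factor, assumed constant
cycles_per_year = 5.0e6        # assumed number of wind-induced stress cycles per year
years = 20

def crack_after(years, a0):
    """Grow a crack year by year with stress ranges drawn from an assumed distribution."""
    a = a0
    for _ in range(years):
        d_sigma = rng.lognormal(mean=3.0, sigma=0.4, size=1000)   # stand-in for the rainflow histogram, MPa
        counts = np.full(1000, cycles_per_year / 1000)            # cycles attributed to each stress range
        dK = Y * d_sigma * np.sqrt(np.pi * a)                     # stress intensity range, MPa*sqrt(m)
        a += np.sum(counts * C * dK**m_exp)                       # summed crack growth over the year
    return a

# Monte Carlo over an uncertain initial crack depth at the tower base (metres)
samples = [crack_after(years, rng.normal(1.0e-3, 0.2e-3)) for _ in range(500)]
print(f"median crack depth after {years} years: {np.median(samples) * 1e3:.2f} mm")
```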

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 493
640 Ubuntombi (Virginity) Among the Zulus: An Exploration of a Cultural Identity and Difference from a Postcolonial Feminist Perspective

Authors: Goodness Thandi Ntuli

Abstract:

The cultural practice of ubuntombi (virginity) among the Zulus is not easily understood from the outside of its cultural context. The empirical study that was conducted through the interviews and focus group discussions about the retrieval of ubuntombi as a cultural practice within the Zulu cultural community indicated that there is a particular cultural identity and difference that can be unearthed from this cultural practice. Being explored from the postcolonial feminist perspective, this cultural identity and difference is discerned in the way in which a Zulu young woman known as intombi (virgin) exercises her power and authority over her own sexuality. Taking full control of her own sexuality from the cultural viewpoint enables her not only to exercise her uniqueness in the midst of multiculturalism and pluralism but also to assert her cultural identity of being intombi. The assertion of the Zulu young woman’s cultural identity does not only empower her to stand on her life principles but also empowers her to lift herself up from the margins of the patriarchal society that otherwise would have kept her on the periphery. She views this as an opportunity for self-development and enhancement through educational opportunities that will enable her to secure a future with financial independence. The underlying belief is that once she has been educationally successful, she would secure a better job opportunity that will enable her to be self-sufficient and not to rely on any male provision for her sustenance. In this, she stands better chances of not being victimized by social patriarchal influences that generally keep women at the bottom of the socio-economic and political ladder. Consequently, ubuntombi (virginity) as a Zulu heritage and cultural identity becomes instrumental in the empowerment of the young women who choose this cultural practice as their adopted lifestyle. In addition, it is the kind of self-empowerment with the intrinsic motivation that works with the innate ability to resist any distraction from an individual’s set goals. It is thus concluded that this kind of motivation is a rare characteristic of the achievers in life. Once these young women adhere to their specified life principles, nothing can stop them from achieving the dreams of their hearts. This includes socio-economic autonomy that will ensure their liberation and emancipation as women in the midst of social and patriarchal challenges that militate against them in the hostile communities of their residence. Another hidden achievement would be to turn around the perception of being viewed as the “other”; instead, they will have to be viewed differently. Their difference lies in the turning around of the archaic kind of cultural practice into a modern tool of self-development and enhancement in contemporary society.

Keywords: cultural, difference, identity, postcolonial, ubuntombi, zulus

Procedia PDF Downloads 153
639 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance

Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar

Abstract:

Laser-based high-resolution spectroscopic experimental techniques such as Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR) have been used for the study of the composition and degradation of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous for human health and can cause respiratory and genetic disorders in humans. The combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization. It contributes to global pollution of land, water and air, and to global warming of the environment. The detection of such toxicants in the form of elemental analysis is a very serious issue for the waste management of various organic and inorganic hydrocarbons and radioactive waste elements. In view of such important points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring the water content and by a kinematic viscosity test, which provides a crude analysis of the degradation of the used engine oil samples. The microscopic quantitative and qualitative analysis was provided by the RDE-OES technique, which confirmed the presence of elemental impurities through Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines in the used waste engine oil samples at a few ppm. The presence of such elemental impurities was confirmed by LIBS spectral analysis at various transition levels of the atomic lines. The recorded transition lines of Pb confirm that the maximum degradation was found in used engine oil samples no. 3 and 4. Apart from the basic tests, the calculations of the dielectric constants and refractive index of the engine oils were performed via SPR analysis.

Keywords: surface plasmon resonance, laser-induced breakdown spectroscopy, ICCD spectrometer, engine oil

Procedia PDF Downloads 115
638 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential

Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen

Abstract:

Brain information transmission in the neuronal network occurs in the form of electrical signals. The neural network transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of 4 different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis of the data coming from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy compared to the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class. Concurrently, the Na+ ion load of a single action potential, the metabolic energy utilization and its thermodynamic aspects are considered for the squid giant axon and mammalian motoneuron models. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
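
For readers unfamiliar with the branching rule invoked above, the 3/2 power (Rall) condition for collapsing a symmetric tree into an equivalent cylinder, and the definition of electrotonic length, can be written in their standard cable-theory forms (these are textbook relations, not equations reproduced from the paper):

```latex
% Rall's 3/2 power rule at a branch point (parent diameter d_p, daughter diameters d_i)
d_p^{3/2} = \sum_i d_i^{3/2}
% Electrotonic length of a branch of physical length l and space constant \lambda,
% with specific membrane resistance R_m, intracellular resistivity R_i and diameter d
L = \frac{l}{\lambda}, \qquad \lambda = \sqrt{\frac{R_m\, d}{4\, R_i}}
```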

Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance

Procedia PDF Downloads 360
637 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
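
A toy one-dimensional version of the input-residual idea might look like the following: a simple rover speed model is run forward with an actuator fault, and the drive force recovered by inverting the same model from the measured speed is compared with the commanded force. The model, parameters, and fault magnitude are illustrative assumptions, not the four-wheeled rover model of the paper.

```python
import numpy as np

m, c, dt = 200.0, 15.0, 0.05            # assumed rover mass (kg), drag coefficient, time step (s)
t = np.arange(0.0, 60.0, dt)
u_cmd = np.full_like(t, 300.0)          # commanded drive force, N

# "True" plant: an actuator fault removes 40 % of the force after t = 30 s
u_true = np.where(t < 30.0, u_cmd, 0.6 * u_cmd)
v = np.zeros_like(t)
for k in range(1, len(t)):
    v[k] = v[k-1] + dt * (u_true[k-1] - c * v[k-1]) / m      # forward simulation (measured speed)

# Model-based (output) residual: measured speed minus fault-free model prediction
v_model = np.zeros_like(t)
for k in range(1, len(t)):
    v_model[k] = v_model[k-1] + dt * (u_cmd[k-1] - c * v_model[k-1]) / m
output_residual = v - v_model

# Inverse-simulation (input) residual: force required to reproduce the measured speed
u_inv = m * np.gradient(v, dt) + c * v
input_residual = u_cmd - u_inv

i = int(45.0 / dt)
print(f"output residual at t = 45 s: {output_residual[i]:8.3f} m/s")
print(f"input residual  at t = 45 s: {input_residual[i]:8.3f} N  (close to the 120 N force loss)")
```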

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 277
636 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (e.g., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to provide input for both weekdays and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with only one intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solution adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
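
The comparison step sketched above, rescaling each pollutant record to a 0-100% scale and then checking for correlation, could be carried out along the following lines; the CSV file and column names are assumptions made for illustration only.

```python
import pandas as pd

# Hypothetical log from one measurement site: one row per timestamp, two co-located sensor readings
df = pd.read_csv("mestre_site1.csv")      # columns assumed: timestamp, leq_db, no2_ppb

def to_percent(series):
    """Rescale a sensor series to 0-100 % of its observed range."""
    return 100.0 * (series - series.min()) / (series.max() - series.min())

df["noise_pct"] = to_percent(df["leq_db"])
df["no2_pct"] = to_percent(df["no2_ppb"])

# Pearson correlation between the two rescaled series
r = df["noise_pct"].corr(df["no2_pct"])
print(f"Pearson correlation between noise and NO2 at this site: {r:.2f}")
```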

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 179
635 Towards a Doughnut Economy: The Role of Institutional Failure

Authors: Ghada El-Husseiny, Dina Yousri, Christian Richter

Abstract:

Social services are often characterized by market failures, which justifies government intervention in the provision of these services. It is widely acknowledged that government intervention breeds corruption, since resources are being transferred from one party to another. However, what is still being extensively studied is the magnitude of the negative impact of corruption on publicly provided services and development outcomes. Corruption has the power to hinder development and cripple our march towards the Sustainable Development Goals. Corruption diminishes the efficiency and effectiveness of public health and education spending and directly impacts the outcomes of these sectors. This paper empirically examines the impact of institutional failure on public sector service provision, with the sole purpose of studying the impact of corruption on SDG 3 and SDG 4: good health and wellbeing and quality education, respectively. The paper explores the effect of corruption on these goals from various perspectives and extends the analysis by examining whether the impact of corruption on these goals differs when the current corruption state is accounted for. Pooled OLS (Ordinary Least Squares) and fixed-effects panel estimation are applied to 22 corrupt and 22 clean countries between 2000 and 2017. Results show that corruption in both corrupt and clean countries has a more severe impact on the health sector than on the education sector. In almost all specifications, corruption has an insignificant effect on school enrollment rates but a significant effect on infant mortality rates. Results further indicate that, on average, a 1-point increase in the CPI (Consumer Price Index) can increase health expenditures by 0.116% in corrupt and clean countries. However, the fixed-effects model indicates that the way health and education expenditures are determined in clean and corrupt countries is largely country-specific, with corruption playing a minimal role. Moreover, the findings show that school enrollment rates and infant mortality rates depend, to a large extent, on public spending. The most astounding result is that corrupt countries, on average, have more effective and efficient healthcare expenditures. While some insights are provided as to why these results prevail, they should be further researched. All in all, corruption impedes development outcomes, and any anti-corruption policies adopted will bring forth immense improvements and speed up the march towards sustainability.
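
A stripped-down version of the estimation strategy described above (pooled OLS versus country fixed effects) could be set up as below; the data file, variable names, and specification are placeholders, not the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per country-year with the kind of variables named in the abstract
panel = pd.read_csv("corruption_panel.csv")   # columns assumed: country, year, infant_mortality, cpi, health_exp

# Pooled OLS across all countries and years
pooled = smf.ols("infant_mortality ~ cpi + health_exp", data=panel).fit()

# Country fixed effects via entity dummies (uses within-country variation only)
fe = smf.ols("infant_mortality ~ cpi + health_exp + C(country)", data=panel).fit()

print(pooled.params[["cpi", "health_exp"]])
print(fe.params[["cpi", "health_exp"]])
```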

Keywords: corruption, education, health, public spending, sustainable development

Procedia PDF Downloads 140
634 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) can be found in the food, pharmaceutical and process industries and has a very complex flow nature. To our knowledge, inadequate research work has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we have considered unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity of the flow, a is the side of the cylinder and ν is the kinematic viscosity of the fluid). The pseudoplastic fluid range has been chosen from close to Newtonian fluid (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For all cases, the domain size is taken as 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points are selected after a thorough grid-independence study at n = 1.0. Fine and equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder. Away from the cylinder the grid is unequal in size and stretched out in all directions. Velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation. A wall boundary (u = v = 0) is imposed on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, written in the finite volume method, is selected for this purpose; it is the default solver scripted in Fluent. The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent’s usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is obtained for both Newtonian and pseudoplastic fluid flow. It has been observed that the drag coefficient increases continuously as the value of n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
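
For reference, the pseudoplastic behaviour studied here is conventionally described by the Ostwald-de Waele (power-law) model, shear-thinning for n < 1; these are the standard constitutive relations rather than equations quoted from the paper:

```latex
% Ostwald-de Waele (power-law) model: shear stress and apparent viscosity
\tau = K\,\dot{\gamma}^{\,n}, \qquad
\eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1} \quad (\text{shear thinning for } n < 1)
% Reynolds number as defined in the abstract
Re = \frac{U_\infty\, a}{\nu}
```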

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 150
633 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method

Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad

Abstract:

Lithium-transition metal oxides such as LiCoO2, LiMn2O4, LiFePO4 and LiNiO2 have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 is particularly attractive for lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several factors interfere with the production of material with good electrochemical properties, including the crystallinity, the average particle size and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and a broad size distribution. On the other hand, solution methods can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized using the sol-gel preparation method, in which lithium acetate and cobalt acetate were used as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solutions were stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel was formed. The as-formed gel was calcined at 700°C for 7 h in ambient atmosphere. The structure and morphology of the LiCoO2 were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM). The diffraction pattern of the material can be indexed on the basis of the α-NaFeO2 structure. The clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) in this pattern indicates that the material forms a well-ordered hexagonal structure. No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows an almost uniform particle size distribution, with particle sizes between 0.3 and 0.5 µm. In conclusion, LiCoO2 powder was successfully synthesized using the sol-gel method. The LiCoO2 showed a hexagonal crystal structure, and the pattern of the prepared sample clearly indicates phase-pure LiCoO2. Meanwhile, the morphology of the sample shows that the particle size and the size distribution are almost uniform.
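
As an aside on how such indexing yields lattice parameters, the sketch below applies Bragg's law to hypothetical (003) and (110) peak positions typical of layered LiCoO2 under Cu K-alpha radiation; these positions are assumed for illustration and are not measurements reported here.

```python
# Hedged sketch: hexagonal lattice parameters from an indexed XRD pattern via
# Bragg's law. The 2-theta values are typical literature positions for layered
# LiCoO2 (Cu K-alpha), used only for illustration.
import math

wavelength = 1.5406                  # Cu K-alpha (angstrom)

def d_spacing(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

d003 = d_spacing(18.9)               # (003) reflection, assumed position
d110 = d_spacing(66.4)               # (110) reflection, assumed position

c = 3.0 * d003                       # for (00l) planes, d = c / l
a = 2.0 * d110                       # for (110) in a hexagonal cell, d = a / 2

print(f"a ~ {a:.3f} A, c ~ {c:.2f} A, c/a ~ {c / a:.2f}")
# A c/a ratio noticeably above about 4.9, together with the split (006)/(102)
# and (108)/(110) doublets, is commonly taken as evidence of a well-ordered
# layered alpha-NaFeO2-type structure rather than a cation-mixed cubic phase.
```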

Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, Sol-Gel method

Procedia PDF Downloads 331
632 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control effort can be exploited by means of data-based prediction of product quality and states. However, the use of machine learning applications in series production is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and to minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this limited variance in the available data. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression versus classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for the classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
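
The regression-versus-classification framing can be illustrated with a minimal sketch in which the same synthetic data set is used both to predict a continuous leakage value and to predict the resulting pass/fail inspection decision. The features, tolerance limit and models below are assumptions for illustration, not the data or methods used at Bosch Rexroth.

```python
# Hedged sketch: one quality-prediction task framed as regression (leakage
# volume flow) and as classification (pass/fail decision). Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                        # in-process measurements (assumed features)
leakage = 2.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1000)
passed = (leakage < 2.5).astype(int)                  # inspection decision from an assumed tolerance

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(X, leakage, passed, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression R^2:", round(r2_score(y_te, reg.predict(X_te)), 3))
print("classification accuracy:", round(accuracy_score(c_te, clf.predict(X_te)), 3))
# With low-variance, well-centred processes the continuous target carries little
# exploitable signal, which is one reason the classification framing can appear
# clearly superior for the inspection decision.
```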

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 114
631 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic System

Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMao Li, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue

Abstract:

The explosively increasing demands of data processing and information storage strongly drive the advancement from the binary logic system to multiple-valued logic systems. The inherent negative differential resistance characteristic, ultra-high-speed switching time and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multiple-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of an input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates and comprise double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature is observed, with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our device. Switching behaviour measurements were successfully carried out, featuring rise and fall times on the order of picoseconds, which can be exploited in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be improved further. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new route for nitride devices to break through the limitations of binary logic.
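
The reported figures allow a quick back-of-the-envelope check. The sketch below derives the valley current density implied by the PVCR and, for an assumed 1 μm² mesa area (an illustrative value not given in the abstract), the corresponding peak current.

```python
# Hedged sketch: figures implied by the reported metrics (183 kA/cm^2 peak
# current density, PVCR of 2.07). The mesa area is an assumed example value.
peak_j = 183e3            # A/cm^2
pvcr = 2.07
valley_j = peak_j / pvcr  # valley current density implied by the PVCR
print(f"valley current density ~ {valley_j / 1e3:.0f} kA/cm^2")

area_cm2 = 1e-8           # assumed 1 um^2 mesa, for illustration only
print(f"peak current for a 1 um^2 mesa ~ {peak_j * area_cm2 * 1e3:.2f} mA")
# The available NDR current swing (peak minus valley) is what an oscillator or a
# multiple-valued logic stage can exploit; a higher PVCR widens this swing.
```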

Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio

Procedia PDF Downloads 74
630 For a Poetic Clinic: Experimentations at Risk on the Images in Performances

Authors: Juliana Bom-Tempo

Abstract:

The proposed composition unfolds between images, performances, clinics and philosophies. For this enterprise we set out from what is not known beforehand, with a question as a compass: "would there be, in the creation, production and execution of images in a performance, a 'when' for the event of a poetic clinic?" In light of this, in order to think a 'when' of the event of a poetic clinic, there are images in performances created, produced and executed in partnership with the author of this text. Faced with this composition, we built four indicators to find the spatiotemporal coordinates that would locate that "when", namely: risk zones; the mobilization of signs; the figuring of the flesh; and an education of the affections. We dealt with the images in performances Crútero; Flesh; Karyogamy and the risk of abortion; Egg white; Egg-mouth; Islands, threads, words ... germs; and Egg-Mouth-Debris, taken as case studies: by engendering risk zones to promote individuations, which never actualize thoroughly, so that something pre-individual always remains while an environment is also individuated; by mobilizing the signs territorialized by the ordinary, causing them to vary the language and the words of order dictated by the everyday into other compositions of sense, other machinations; by generating a figure of flesh, disarranging the bodies, isolating them in the production of a ground force that causes the body to leak out and undoes the functionalities of the organs; and, finally, by producing an education of the affections, placing perceptions in becoming and disconnecting the visible in the production of small deserts that call for the creation of a people yet to come. The performance proceeds as a problematization of the images fixed by the ordinary, producing gestures that precipitate the individuation of images in performance, foreign to the configurations that gather bodies and spaces in what we call the common. Lawrence proposes to think of "people" who continually use umbrellas to protect themselves from chaos. These have the function of wrapping up the chaos in visions that create houses, forms and stabilities; they paint a sky on the underside of the umbrella, where people march and die; a chaos where people live and wither. To pierce the umbrella is a desire for chaos; a poet sets himself up as an enemy of convention, so as to have an image of chaos and a little sun that burns his skin. The images in performances presented here, thereby, moved in search of the power of producing a spatio-temporal "when": putting territories in risk zones, mobilizing the signs that format the day-to-day, opening the bodies to disorganization, and producing an education of the affections for the event of a poetic clinic.

Keywords: experimentations, images in performances, poetic clinic, risk

Procedia PDF Downloads 83
629 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows. (1) The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results; style is present in both planes. (2) Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction; this 'grammar' would be a style marker. (3) Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when it is known that intra-speaker variation depends on so many factors? (4) The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors each recurrently choose the same options, but differently from one another, each author's choices have discriminatory power. (5) Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail; the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis and, hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
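
A minimal sketch of the Jaccard comparison described above is given below; the tag names are generic Greimassian-style labels invented for illustration and are not the study's actual annotation scheme.

```python
# Hedged sketch: Jaccard coefficient between the sets of semiotic tags observed
# in two texts. Tag names are invented placeholders.
def jaccard(tags_a: set, tags_b: set) -> float:
    """|A intersection B| / |A union B|; 1.0 means identical tag sets."""
    if not tags_a and not tags_b:
        return 1.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

author1_text1 = {"conjunction", "euphoria", "temporalization", "actorialization"}
author1_text2 = {"conjunction", "euphoria", "spatialization", "actorialization"}
author2_text1 = {"disjunction", "dysphoria", "temporalization", "aspectualization"}

print(jaccard(author1_text1, author1_text2))  # same author: higher similarity (0.6)
print(jaccard(author1_text1, author2_text1))  # different authors: lower similarity (~0.14)
```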

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 215
628 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have wider production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field in the MM fiber provides an optimal power budget in connections, leads to a decrease in insertion loss (IL) and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequently different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust; these were simulated and measured, and their influence on the EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
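
As a rough illustration of how the encircled flux is obtained from a near-field measurement, the sketch below integrates an assumed radial intensity profile; the Gaussian-like profile and the evaluation radii in the comments are illustrative, not measurements from this work.

```python
# Hedged sketch: encircled flux EF(r) from a radial near-field intensity profile
# I(r): cumulative power within radius r, with annular weighting 2*pi*r*dr,
# divided by the total power. The profile below is a placeholder launch.
import numpy as np

r = np.linspace(0.0, 25.0, 501)              # radius across a 50 um core (um)
intensity = np.exp(-(r / 12.0) ** 2)         # assumed near-field intensity profile

power_density = 2.0 * np.pi * r * intensity  # annular weighting
cumulative = np.cumsum(power_density)
ef = cumulative / cumulative[-1]

for radius in (4.5, 19.0):                   # radii typically used in EF templates for 50 um MMF
    print(f"EF({radius} um) = {np.interp(radius, r, ef):.2f}")
```

Comparing such EF values against the template limits for a given source and fiber type is what allows a launch condition to be classified as compliant or over/underfilled.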

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 346
627 Choice Analysis of Ground Access to São Paulo/Guarulhos International Airport Using Adaptive Choice-Based Conjoint Analysis (ACBC)

Authors: Carolina Silva Ansélmo

Abstract:

Airports are demand-generating poles that affect the flow of traffic around them. The airport access system must be fast, convenient and adequately planned, considering its potential users. An airport with good ground access conditions can provide users with a more satisfactory access experience. When several transport options are available, service providers must understand users' preferences and the expected quality of service. The present study focuses on airport access in a comparative scenario between bus, private vehicle, subway, taxi and urban mobility transport applications to São Paulo/Guarulhos International Airport. The objectives are (i) to identify the factors that influence the choice, (ii) to measure Willingness to Pay (WTP), and (iii) to estimate the market share of each mode. The applied method was the Adaptive Choice-Based Conjoint Analysis (ACBC) technique using Sawtooth Software. Conjoint analysis, rooted in utility theory, is a survey technique that quantifies the customer's perceived utility when choosing between alternatives. Assessing user preferences provides insights into their priorities regarding product or service attributes. An additional advantage of conjoint analysis is its requirement for a smaller sample size compared to other methods. Furthermore, ACBC provides valuable insights into consumers' preferences, willingness to pay and market dynamics, aiding strategic decision-making on customer experience, pricing and market segmentation. In the present research, the ACBC questionnaire had the following variables: (i) access time to the boarding point, (ii) comfort in the vehicle, (iii) number of travelers together, (iv) price, (v) supply power, and (vi) type of vehicle. The case study questionnaire obtained 213 valid responses for the scenario of access from the São Paulo city center to São Paulo/Guarulhos International Airport. As a result, price and the number of travelers are the most relevant attributes for the sample when choosing airport access. The estimated market share is led by urban mobility transport applications, followed by buses, private vehicles, taxis and the subway.
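
After a conjoint study, market shares are typically simulated from the estimated part-worth utilities with a share-of-preference (logit) rule, and WTP is derived from the ratio of an attribute coefficient to the price coefficient. The sketch below uses invented utilities and an invented price coefficient purely for illustration; it does not reproduce this survey's estimates.

```python
# Hedged sketch: share-of-preference simulation from total utilities per mode,
# plus a willingness-to-pay calculation. All numbers are placeholders.
import math

utilities = {                 # assumed total utility V of each access mode
    "app-based ride": 1.2,
    "bus": 0.6,
    "private car": 0.4,
    "taxi": 0.1,
    "subway": 0.0,
}

denom = sum(math.exp(v) for v in utilities.values())
shares = {mode: math.exp(v) / denom for mode, v in utilities.items()}
for mode, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{mode:15s} {share:.1%}")

# WTP for an attribute improvement: utility gain divided by the magnitude of the
# price coefficient, e.g. a 0.3-util gain with -0.02 utils per currency unit.
price_coefficient = -0.02     # assumed
wtp = 0.3 / -price_coefficient
print(f"WTP ~ {wtp:.0f} currency units")
```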

Keywords: adaptive choice-based conjoint analysis, ground access to airport, market share, willingness to pay

Procedia PDF Downloads 47
626 Policy Implications of Cashless Banking on Nigeria’s Economy

Authors: Oluwabiyi Adeola Ayodele

Abstract:

This study analysed the policy and general issues that have arisen over time in Nigeria's cashless banking environment as a result of the lack of a legal framework on electronic banking in Nigeria. It undertook an in-depth study of the cashless banking system: it discussed the evolution, growth and development of cashless banking in Nigeria; revealed the expected benefits of the cashless banking system; appraised regulatory issues and other prevalent problems of cashless banking in Nigeria; and made appropriate recommendations where necessary. The study relied on primary and secondary sources of information. The primary sources included the Constitution of the Federal Republic of Nigeria, statutes, conventions and judicial decisions, while the secondary sources included books, journal articles, newspapers and Internet materials. The study revealed that cashless banking has been adopted in Nigeria but is still at the developing stage. It revealed that there is no law for the regulation of cashless banking in Nigeria; what Nigeria relies on for regulation is the Central Bank of Nigeria's Cashless Policy, 2014. The Banks and Other Financial Institutions Act, Chapter B3, LFN 2004, lacks provisions to accommodate issues of Internet banking. However, under the general principle of legality in criminal law, and by the provisions of the Nigerian Constitution, a person can only be punished for conduct that has been defined as criminal by written law, with the penalties specifically stated in that law. Although Nigeria has potent laws for the regulation of paper banking, these laws cannot simply be applied to paperless transactions, because the issues involved in the two kinds of transactions differ. The study also revealed that the absence of law in the cashless banking environment in Nigeria will subject consumers to endless risks. It revealed that the creation of banking markets via the Internet relies on both available technologies and appropriate laws and regulations, and that the laws of some of the other countries considered have already addressed most of the legal issues and other problems prevalent in the cashless banking environment. The study also revealed some other problems prevalent in the Nigerian cashless banking environment. It concluded that, for Nigeria to find solutions to the legal issues raised in its cashless banking environment and to the other problems of cashless banking, it should have a viable legal framework for Internet banking. It further concluded that the Central Bank of Nigeria's policy on cashless banking is not potent enough to tackle the challenges posed to cashless banking in Nigeria, because policies have only a persuasive and not a binding effect; there is, therefore, a need for appropriate laws for the regulation of cashless banking in Nigeria. Finally, there is a need to create more awareness of the system among Nigerians and to address infrastructural problems, such as the prevalent power outages that often cause Internet network problems.

Keywords: cashless-banking, Nigeria, policies, laws

Procedia PDF Downloads 463
625 Usage of Cyanobacteria in Battery: Saving Money, Enhancing the Storage Capacity, Making Portable, and Supporting the Ecology

Authors: Saddam Husain Dhobi, Bikrant Karki

Abstract:

The main objective of this paper is to save money, balance the ecosystem of terrestrial organisms, help control global warming, and enhance the storage capacity of a battery at the required weight and thinness by using cyanobacteria in the battery. To fulfil this purpose, different methods are combined: analytical, biological, chemical, theoretical and physical, together with some engineering design. Using these methods, a special type of battery can be produced that has a long life, a high storage capacity and a clean environmental profile, and that saves money, by using the byproduct of cyanobacteria, i.e. glucose. Cyanobacteria are a special type of bacteria that produce different types of extracellular glucose and oxygen with the help of a little sunlight, water and carbon dioxide, and they can survive in freshwater, marine and terrestrial environments. In this process, more O₂ is released than by plants, owing to the rapid growth rate of cyanobacteria. The materials required to produce glucose with the help of cyanobacteria are easily available. Since CO₂ is a greenhouse gas that causes global warming, utilizing this gas helps preserve the ecological balance: the byproduct glucose (C₆H₁₂O₆) can be used as the raw material for the battery, whereas the escaping O₂ is utilized by living organisms. The glucose produced by cyanobacteria enters the Krebs cycle (citric acid cycle), in which it is completely oxidized and all the available energy of the glucose molecule is released in the form of electrons and protons. If suitable anodes and cathodes are used, these electrons and protons can be captured to produce the required electric current with the help of the cyanobacterial byproduct. According to the Virginia Tech bio-battery and Sony, 13 enzymes and air are used to extract nearly 24 electrons from a single glucose unit, giving an output power of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg. This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in mobile devices. By using cyanobacteria in a battery, we are able to reduce carbon dioxide, help limit global warming, and enhance the storage capacity of the battery to more than 10 times that of a lithium battery, while saving money and balancing the ecology. In this way, energy can be produced from cyanobacteria and used in a battery for different benefits. In addition, owing to their mass, size and easy cultivation, cyanobacteria make it easier to maintain the size of the battery. Hence, cyanobacteria can be used for a battery of suitable size, enhancing the storage capacity of the battery, helping the environment, and supporting portability.
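
The quoted 596 Ah/kg can be put in perspective with a simple Faraday-law estimate based on the roughly 24 electrons harvested per glucose molecule; the sketch below uses standard constants and is an illustrative calculation, not a result from the paper.

```python
# Hedged sketch: theoretical charge available from glucose if ~24 electrons per
# molecule are harvested, compared with the 596 Ah/kg figure quoted above.
FARADAY = 96485.0          # C per mol of electrons
M_GLUCOSE = 180.16         # g/mol
ELECTRONS_PER_GLUCOSE = 24

charge_per_gram = ELECTRONS_PER_GLUCOSE * FARADAY / M_GLUCOSE   # C/g
theoretical_ah_per_kg = charge_per_gram * 1000.0 / 3600.0       # Ah/kg

print(f"theoretical limit ~ {theoretical_ah_per_kg:.0f} Ah/kg")
print(f"reported figure   ~ 596 Ah/kg "
      f"({596 / theoretical_ah_per_kg:.0%} of the theoretical limit)")
# The reported density sits well below the Faraday-law ceiling, which is
# consistent with practical losses in enzymatic electron harvesting.
```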

Keywords: anode, byproduct, cathode, cyanobacteria, glucose, storage capacity

Procedia PDF Downloads 316