Search results for: interference of waves
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1221

231 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability

Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León

Abstract:

Heart rate variability (HRV) is defined as the temporal variation between heartbeats, or RR intervals (the distance between R waves in an electrocardiographic signal). This distance is a recognized biomarker: its analysis makes it possible to assess the sympathetic and parasympathetic nervous systems, which regulate the cardiac muscle, and allows health specialists and researchers to diagnose various pathologies. For the acquisition and analysis of HRV from a cardiac electrical signal, electronic equipment and analysis software that work independently are currently used. This complicates and delays interpretation and diagnosis, putting patients' health at greater risk and potentially leading to untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics, reducing the time required and resulting in timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform. The device calculates the RR intervals and the different metrics, and the application allows a user to visualize the cardiac signal and the 19 metrics in real time. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed, and the device was compared against two reference systems. The results show a strong correlation (r > 0.95, p < 0.05) between the 19 metrics compared.
Therefore, the use of the portable system evaluated in clinical scenarios controlled by medical specialists and researchers is recommended for the evaluation of the condition of the cardiac system.
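The abstract does not list which 19 metrics the device computes, but a minimal sketch of three standard time-domain HRV metrics (SDNN, RMSSD, pNN50) derived from a series of RR intervals might look like this:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Compute three standard time-domain HRV metrics from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                             # successive RR differences
    sdnn = rr.std(ddof=1)                          # overall RR variability
    rmssd = np.sqrt(np.mean(diff ** 2))            # short-term variability
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)   # % of diffs larger than 50 ms
    return {"SDNN": sdnn, "RMSSD": rmssd, "pNN50": pnn50}

# Example: a short, slightly irregular RR series (ms)
metrics = hrv_time_domain([800, 810, 790, 850, 805, 795])
```

In practice the device would feed the RR series detected from the ECG waveform into such routines, one per metric.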

Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device.

Procedia PDF Downloads 148
230 Intrinsic Motivational Factor of Students in Learning Mathematics and Science Based on Electroencephalogram Signals

Authors: Norzaliza Md. Nor, Sh-Hussain Salleh, Mahyar Hamedi, Hadrina Hussain, Wahab Abdul Rahman

Abstract:

Motivation is mainly the students' desire to engage in the learning process, but it also depends on the goals behind their involvement or non-involvement in academic activity. Even when students' motivation is at the same level, the basis of their motivation may differ. This study focuses on the intrinsic motivational factor, in which students enjoy learning, or the feeling of accomplishment from an activity or study, for its own sake. The intrinsic motivational factor in learning mathematics and science has been found difficult to achieve because it depends on students' interest. In the Programme for International Student Assessment (PISA) for mathematics and science, Malaysia is ranked third lowest. The main problem in the Malaysian educational system is that students tend to have extrinsic motivation: they have to score in exams in order to achieve a good result and enrol as university students. The use of electroencephalogram (EEG) signals to identify students' intrinsic motivational factor in learning science and mathematics has so far been scarce. In this research, we identify the correlation between a precursor emotion and its dynamic emotion to verify the intrinsic motivational factor of students in learning mathematics and science. The 2-D Affective Space Model (ASM) was used to identify the relationship between a precursor emotion and its dynamic emotion based on four basic emotions: happy, calm, fear, and sad. These four basic emotions were used as reference stimuli. An EEG device was used to capture the brain waves, while Mel Frequency Cepstral Coefficients (MFCC) were adopted for feature extraction before the features were fed to a Multilayer Perceptron (MLP) to classify the valence and arousal axes of the ASM.
The results show that the precursor emotion influenced the dynamic emotions, and they indicate that most students have no interest in mathematics and science, according to the negative emotions (sad and fear) that appear in the EEG signals. We hope that these results can help us further relate the behaviour and intrinsic motivational factor of students towards the learning of mathematics and science.
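As an illustration of the final classification step, the sketch below maps a (valence, arousal) pair onto one of the four basic emotions by ASM quadrant. The particular quadrant assignment (e.g. happy as positive valence and high arousal) is a common convention assumed here for illustration, not taken from the paper:

```python
def asm_quadrant(valence, arousal):
    """Map classifier outputs on the 2-D Affective Space Model to one of four
    basic emotions. Quadrant assignment is an illustrative assumption:
    happy (+v, +a), calm (+v, -a), fear (-v, +a), sad (-v, -a)."""
    if valence >= 0:
        return "happy" if arousal >= 0 else "calm"
    return "fear" if arousal >= 0 else "sad"
```

With such a mapping, MLP outputs on the two axes translate directly into one of the four reference emotions.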

Keywords: EEG, MLP, MFCC, intrinsic motivational factor

Procedia PDF Downloads 334
229 Displacement and Cultural Capital in East Harlem: Use of Community Space in Affordable Artist Housing

Authors: Jun Ha Whang

Abstract:

As New York City weathers a swelling 'affordability crisis' marked by rapid transformation in land development and urban culture, much of the associated scholarly debate has turned to questions of the underlying mechanisms of gentrification. Though classically approached from the point of view of urban planning, increasingly these questions have been addressed with an eye to understanding the role of cultural capital in neighborhood valuation. This paper will examine the construction of an artist-specific affordable housing development in the Spanish Harlem neighborhood of Manhattan in order to identify and discuss several cultural parameters of gentrification. This study's goal is not to argue that the development in question, named Art space PS 109, straightforwardly increases or decreases the rate of gentrification in Spanish Harlem, but rather to study the dynamics present in the construction of Art space PS 109 as a case study considered against the broader landscape of gentrification in New York, particularly with respect to the impact of artist communities on housing supply. In the end, what Art space PS 109 most valuably offers us is a reference point for a comparative analysis of affordable housing strategies currently being pursued within municipal government. Our study of Art space PS 109 has allowed us to examine a microcosm of the city's response and evaluate its overall strategy accordingly. As a baseline, the city must aggressively pursue an affordability strategy specifically suited to the needs of each of its neighborhoods. It must also pursue this in such a way as not to undermine its own efforts by rendering them susceptible to the exploitative involvement of real estate developers seeking to identify successive waves of trendy neighborhoods.
Though Art space PS 109 offers an invaluable resource for the city’s legitimate aim of preserving its artist communities, with such a high inclusion rate of artists from outside of the community the project risks additional displacement, strongly suggesting the need for further study of the implications of sites of cultural capital for neighborhood planning.

Keywords: artist housing, displacement, east Harlem, urban planning

Procedia PDF Downloads 133
228 Interference of Mild Drought Stress on Estimation of Nitrogen Status in Winter Wheat by Some Vegetation Indices

Authors: H. Tavakoli, S. S. Mohtasebi, R. Alimardani, R. Gebbers

Abstract:

Nitrogen (N) is one of the most important agricultural inputs affecting crop growth, yield and quality in rain-fed cereal production. The N demand of crops varies spatially across fields due to spatial differences in soil conditions. In addition, the response of a crop to fertilizer applications is heavily reliant on plant-available water. Matching N supply to water availability is thus essential to achieve an optimal crop response. The objective of this study was to determine the effect of drought stress on the estimation of the nitrogen status of winter wheat by some vegetation indices. During the 2012 growing season, a field experiment was conducted at the Bundessortenamt (German Plant Variety Office) Marquardt experimental station, located in the village of Marquardt about 5 km northwest of Potsdam, Germany (52°27' N, 12°57' E). The experiment was designed as a randomized split-block design with two replications. Treatments consisted of four N fertilization rates (0, 60, 120 and 240 kg N ha⁻¹, in total) and two water regimes (irrigated (Irr) and non-irrigated (NIrr)), in a total of 16 plots of 4.5 × 9.0 m. The indices were calculated using readings of a spectroradiometer made of tec5 components. The main parts were two "Zeiss MMS1 NIR enh" diode-array sensors with a nominal range of 300 to 1150 nm, a resolution of less than 10 nm, and an effective range of 400 to 1000 nm. The following vegetation indices were calculated: NDVI, GNDVI, SR, MSR, NDRE, RDVI, REIP, SAVI, OSAVI, MSAVI, and PRI. Measurements were conducted throughout the growing season at different plant growth stages: stem elongation (BBCH 32-41), booting (BBCH 43), inflorescence emergence and heading (BBCH 56-58), flowering (BBCH 65-69), and development of fruit (BBCH 71). According to the results obtained, among the indices, NDRE and REIP were least affected by drought stress and can provide reliable information on wheat nitrogen status, regardless of the water status of the plant.
They also showed strong relations with nitrogen status of winter wheat.
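As an illustration of the indices involved, NDVI and NDRE are simple normalized-difference ratios of canopy reflectance at selected wavelengths. The band centres used below (670, 720, 790 and 800 nm) are common conventions, not necessarily those used in the study:

```python
def vegetation_indices(reflectance):
    """Compute NDVI and NDRE from canopy reflectance sampled at selected
    wavelengths (nm, given as dictionary keys)."""
    r = reflectance
    ndvi = (r[800] - r[670]) / (r[800] + r[670])   # broadband greenness
    ndre = (r[790] - r[720]) / (r[790] + r[720])   # red-edge, N-sensitive
    return {"NDVI": ndvi, "NDRE": ndre}

# Example reflectances typical of a healthy canopy
idx = vegetation_indices({670: 0.05, 720: 0.20, 790: 0.45, 800: 0.48})
```

The red-edge bands used by NDRE (and the red-edge inflection point used by REIP) are what make these indices comparatively insensitive to canopy structure changes caused by drought.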

Keywords: nitrogen status, drought stress, vegetation indices, precision agriculture

Procedia PDF Downloads 285
227 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique

Authors: Thu Nhi Tran Caliste

Abstract:

X-ray Bragg diffraction imaging (“topography”) entered into practical use when Lang designed an “easy” technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of “monochromatic topography”, known as “Rocking Curve Imaging” (RCI), was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data consist of a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the “voxel” integrated intensity (the only data provided by the previous techniques) but also its “mosaic spread” (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterisation of the crystal and deposited layers. These examples include the characterisation of dislocations and twins occurring during silicon growth, various growth features in Al₂O₃, GaN and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of defects within deposited layers, or their effect on the substrate.
We could also observe (due to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato’s interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of the µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
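A minimal sketch of the per-pixel analysis described above, using simple moment estimates (integrated intensity, centre-of-mass peak position, and an FWHM derived from the variance under a Gaussian assumption) rather than the authors' actual fitting software:

```python
import numpy as np

def rocking_curve_stats(angles, counts):
    """Per-pixel summary of a rocking curve: integrated intensity, peak
    position (centre of mass) and FWHM (from the variance, assuming a
    roughly Gaussian curve)."""
    angles = np.asarray(angles, dtype=float)
    counts = np.asarray(counts, dtype=float)
    integrated = counts.sum()
    peak = (angles * counts).sum() / integrated         # centre of mass
    var = (counts * (angles - peak) ** 2).sum() / integrated
    fwhm = 2.3548 * np.sqrt(var)                        # Gaussian FWHM/sigma
    return integrated, peak, fwhm

# Synthetic Gaussian rocking curve centred at 0.2 mrad with sigma 0.05 mrad
ang = np.linspace(-0.5, 0.9, 201)
cts = np.exp(-0.5 * ((ang - 0.2) / 0.05) ** 2)
total, pos, width = rocking_curve_stats(ang, cts)
```

Applied to every pixel of the ~300-image stack, this yields the three maps (intensity, peak position, FWHM) that RCI provides.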

Keywords: rocking curve imaging, X-ray diffraction, defect, distortion

Procedia PDF Downloads 98
226 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground, and it is composed of ions and electrons, called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. However, these stations install multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observations were made at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under the assumption of a thin-layer ionosphere. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET.
The results obtained with the single-frequency GPS receiver were also compared with those from dual-frequency measurements.
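A toy sketch of the least-squares step: representing vertical TEC as a low-order polynomial in latitude and longitude and fitting its coefficients to TEC samples. The polynomial order and the synthetic coordinates below are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def fit_tec_surface(lat, lon, vtec, order=2):
    """Least-squares fit of a 2-D polynomial VTEC(lat, lon) surface to
    TEC samples, sketching the thin-layer approach. Returns the list of
    (i, j) exponent pairs for terms lat**i * lon**j with i + j <= order,
    and the fitted coefficient vector."""
    lat, lon, vtec = map(np.asarray, (lat, lon, vtec))
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([lat ** i * lon ** j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vtec, rcond=None)
    return terms, coeffs

# Synthetic check: recover a known surface 10 + 0.5*lat + 0.2*lon (TECU)
rng = np.random.default_rng(0)
lat = rng.uniform(15, 25, 50)   # rough latitude band over Myanmar
lon = rng.uniform(92, 100, 50)
vtec = 10 + 0.5 * lat + 0.2 * lon
terms, coeffs = fit_tec_surface(lat, lon, vtec)
```

In the actual method, the samples come from pseudorange observations mapped to ionospheric pierce points under the thin-layer assumption rather than from known VTEC values.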

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 102
225 The Feasibility of a Protected Launch Site near Melkbosstrand for a Public Transport Ferry across Table Bay, Cape Town

Authors: Mardi Falck, André Theron

Abstract:

Traffic congestion on the northern side of Table Bay is a major problem. In Gauteng, the implementation of the Gautrain between Pretoria and Johannesburg relieved traffic congestion there. In 2002, two entrepreneurs endeavoured to implement a hovercraft ferry service across the bay from Table View to the Port of Cape Town; however, during the EIA process, residents of the area objected to the proposed launch site. Seventeen years later, the traffic problem has not gone away; instead, the congestion has increased. While property prices in the City Bowl of Cape Town are ever increasing, people tend to live on the outskirts of the CBD and commute to work. This means more vehicles on the road every day, and the public transport services cannot keep up with the demand. For this reason, the study area of the previous hovercraft plans is being extended further north. The study's aim is thus to determine the feasibility of a launch site north of Bloubergstrand from which to launch and receive a public transport ferry across Table Bay. The feasibility is established by researching ferry services across the world and what makes them successful. Different types of ferries and their operational capacities in terms of weather and waves are researched, and by establishing the offshore and nearshore wind and wave climate for the area, an appropriate protected launch site is determined. It was concluded that travel time could potentially be halved. A hovercraft proved to be the most feasible ferry type because it does not require a conventional harbour. Other types of vessels require a protected launch site because of the wave climate, which means large breakwaters that influence the cost substantially. The Melkbos Cultural Centre proved to be the most viable option for the location of the launch site because it already has buildings and infrastructure.
It is recommended that, if a harbour is chosen for the proposed ferry service, it could be used for more services like fishing, eco-tourism and leisure. Further studies are recommended to optimise the feasibility of such a harbour.

Keywords: Cape Town, ferry, public, Table Bay

Procedia PDF Downloads 124
224 Optimized Parameters for Simultaneous Detection of Cd²⁺, Pb²⁺ and Co²⁺ Ions in Water Using Square Wave Voltammetry on the Unmodified Glassy Carbon Electrode

Authors: K. Sruthi, Sai Snehitha Yadavalli, Swathi Gosh Acharyya

Abstract:

Water is the most crucial element for sustaining life on Earth, and increasing water pollution directly or indirectly harms human life. Most heavy metal ions are harmful in their cationic form. These heavy metal ions are released by activities such as the disposal of batteries, industrial waste, automobile emissions, and soil contamination. Ions such as Pb²⁺, Co²⁺ and Cd²⁺ are carcinogenic and show many harmful effects when consumed above the limits proposed by the WHO. The simultaneous detection of these highly toxic heavy metal ions is reported in this study. There are many analytical methods for quantification, but electrochemical techniques are given high priority because of their sensitivity and ability to detect lower concentrations. Among electrochemical methods, square wave voltammetry was preferred due to its suppression of background currents, which act as interference. Square wave voltammetry was performed on a glassy carbon electrode (GCE) for the quantitative detection of the ions. A three-electrode system was chosen for experimentation, consisting of a glassy carbon working electrode (3 mm diameter), an Ag/AgCl reference electrode, and a platinum wire counter electrode. Detection was optimized over the experimental parameters pH, scan rate, and temperature, and square wave voltammetry was performed under the optimized conditions for simultaneous detection. Scan rates were varied from 5 mV/s to 100 mV/s; at 25 mV/s, all three ions were detected simultaneously, with distinct peaks at their respective stripping potentials. The pH was varied from 3 to 8, and pH 5 was taken as the optimum for all three ions: the decreasing response at lower pH is attributed to hydrogen gas evolution, and the decreasing response above pH 5 to hydroxide formation on the surface of the working electrode (GCE). The temperature was varied from 25˚C to 45˚C, and 35˚C was taken as the optimum for the three ions. Deposition and stripping potentials of +1.5 V and -1.5 V were applied, with a resting time of 150 seconds. The three ions were detected at stripping potentials of -0.84 V (Cd²⁺), -0.54 V (Pb²⁺), and -0.44 V (Co²⁺). The detection parameters were thus optimized on a glassy carbon electrode for the simultaneous detection of the ions at low concentrations by square wave voltammetry.
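To illustrate how the stripping potentials identify the ions, the sketch below locates the peak of a (synthetic) voltammogram within a search window around each reported potential. The window bounds and peak widths are illustrative assumptions:

```python
import numpy as np

def identify_peaks(potential, current, windows):
    """Locate the stripping-peak potential for each ion inside its search
    window: a simplified sketch of reading an SWV voltammogram."""
    potential = np.asarray(potential, dtype=float)
    current = np.asarray(current, dtype=float)
    found = {}
    for ion, (lo, hi) in windows.items():
        mask = (potential >= lo) & (potential <= hi)
        idx = np.flatnonzero(mask)[np.argmax(current[mask])]
        found[ion] = potential[idx]
    return found

# Synthetic voltammogram with Gaussian peaks at the reported potentials (V)
E = np.linspace(-1.2, 0.0, 601)
I = sum(np.exp(-0.5 * ((E - mu) / 0.03) ** 2) for mu in (-0.84, -0.54, -0.44))
peaks = identify_peaks(E, I, {"Cd2+": (-1.0, -0.7),
                              "Pb2+": (-0.65, -0.5),
                              "Co2+": (-0.5, -0.35)})
```

With well-separated stripping potentials, as reported here, a simple windowed maximum is enough to assign each peak to its ion.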

Keywords: cadmium, cobalt, lead, glassy carbon electrode, square wave anodic stripping voltammetry

Procedia PDF Downloads 72
223 The Journey from Lean Manufacturing to Industry 4.0: The Rail Manufacturing Process in Mexico

Authors: Diana Flores Galindo, Richard Gil Herrera

Abstract:

Nowadays, Lean Manufacturing and Industry 4.0 are very important in every country. One of the main benefits is continued market presence. It has been identified that there is a need to change existing educational programs, as well as update the knowledge and skills of existing employees. It should be borne in mind that behind each technological improvement, there is a human being. Human talent cannot be neglected. The main objectives of this article are to review the link between Lean Manufacturing, the incorporation of Industry 4.0 and the steps to follow to implement it; analyze the current situation and study the implications and benefits of this new trend, with a particular focus on Mexico. Lean Manufacturing and Industry 4.0 implementation waves must always take care of the most important capital – intellectual capital. The methodology used in this article comprised the following steps: reviewing the reality of the fourth industrial revolution, reviewing employees’ skills on the journey to become world-class, and analyzing the situation in Mexico. Lean Manufacturing and Industry 4.0 were studied not as exclusive concepts, but as complementary ones. The methodological framework used is focused on motivating companies’ collaborators to guarantee common results, innovate, and remain in the market in the face of new requirements from company stakeholders. The key findings were that both trends emphasize the need to improve communication across the entire company and incorporate new technologies into everyday work, from the shop floor to administrative staff, to help improve processes. Taking care of people, activities and processes will bring a company success. In the specific case of Mexico, companies in all sectors need to be aware of and implement technological improvements according to their specific needs. Low-cost labor represents one of the most typical barriers. 
In conclusion, companies must build a roadmap according to their strategy and needs to achieve their short, medium- and long-term goals.

Keywords: lean management, lean manufacturing, industry 4.0, motivation, SWOT analysis, Hoshin Kanri

Procedia PDF Downloads 116
222 Russian ‘Active Measures’: An Applicable Supporting Tool for Russia’s Foreign Policy Objectives in the 21st Century

Authors: Håkon Riiber

Abstract:

This paper explores the extent to which Russian ‘Active Measures’ play a role in contemporary Russian foreign policy and in what way the legacy of the Soviet Union is still apparent in these practices. The analysis draws on a set of case studies from the 21st century to examine these aspects, showing which ‘Active Measures’ features are old and which are new in the post-Cold War era. The paper highlights that the topic has gained significant academic and political interest in recent years, largely due to the aggressive posture of the Russian Federation on the world stage, exemplified through interventions in Estonia, Georgia, and Ukraine and interference in several democratic elections in the West. However, the paper argues that the long-term impact of these measures may have unintended implications for Russia. While Russia is unlikely to stop using Active Measures, increased awareness of the exploitation of weaknesses, institutions, or other targets may lead to greater security measures and an ability to identify and defend against these activities. The paper contends that Soviet-style ‘Active Measures’ from the Cold War era have been modernized and are now utilized to create an advantageous atmosphere for further exploitation to support contemporary Russian foreign policy. It offers three key points to support this argument: the reenergized legacy of the Cold War era, the use of ‘Active Measures’ in a number of cases in the 21st century, and the applicability of AM to the Russian approach to foreign policy. The analysis reveals that while this is not a new Russian phenomenon, it is still oversimplified and inaccurately understood by the West, which may result in a decreased ability to defend against these activities and limit the unwarranted escalation of the ongoing security situation between the West and Russia. 
The paper concludes that the legacy of Soviet-era Active Measures continues to influence Russian foreign policy, and modern technological advances have only made them more applicable to the current political climate. Overall, this paper sheds light on the important issue of Russian ‘Active Measures’ and the role they play in contemporary Russian foreign policy. It emphasizes the need for increased awareness, understanding, and security measures to defend against these activities and prevent further escalation of the security situation between the West and Russia.

Keywords: Russian espionage, active measures, disinformation, Russian intelligence

Procedia PDF Downloads 60
221 Aero-Hydrodynamic Model for a Floating Offshore Wind Turbine

Authors: Beatrice Fenu, Francesco Niosi, Giovanni Bracco, Giuliana Mattiazzo

Abstract:

In recent years, Europe has seen a great development of renewable energy, with a view to reducing polluting emissions and transitioning to cleaner forms of energy, as established by the European Green Deal. Wind energy has come to cover almost 15% of European electricity needs and is constantly growing. In particular, far-offshore wind turbines are attractive for exploiting high-speed winds and high wind availability. Considering offshore wind turbine siting, which combines resource analysis, bathymetry, environmental regulations, and maritime traffic, and considering the influence of waves on the stability of the platform, the hydrodynamic characteristics of the platform become fundamental to the performance of the turbine, especially for the pitch motion. Many platform geometries have been studied and used in recent years; their concepts are based on different considerations such as hydrostatic stability, material, cost, and mooring system. A new method to arrive at a high-performance substructure for different kinds of wind turbines is proposed. The system comprising substructure, mooring, and wind turbine is implemented in OrcaFlex, and the simulations are performed for several sea states and wind speeds. An external dynamic library is implemented for the turbine control system. The study shows a comparison among different substructures and the new concepts developed. In order to validate the model, CFD simulations will be performed by means of STAR-CCM+, and a comparison between rigid and elastic bodies for the blades and tower will be carried out. A global model will be built to predict the productivity of the floating turbine according to siting, resources, substructure, and mooring. The Levelized Cost of Electricity (LCOE) of the system is estimated, giving a complete overview of the advantages of floating offshore wind turbine plants. Different case studies will be presented.
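The LCOE estimate mentioned above is conventionally computed as discounted lifetime cost divided by discounted lifetime energy. A sketch with purely hypothetical figures (not the study's):

```python
def lcoe(capex, opex_per_year, energy_mwh_per_year, years, discount_rate):
    """Levelized Cost of Electricity: discounted lifetime cost divided by
    discounted lifetime energy, in currency per MWh."""
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    costs = capex + opex_per_year * sum(disc)
    energy = energy_mwh_per_year * sum(disc)
    return costs / energy

# Hypothetical 10 MW floating unit: 40 M EUR capex, 1.5 M EUR/yr opex,
# 35 GWh/yr over 25 years at a 6% discount rate
value = lcoe(40e6, 1.5e6, 35_000, 25, 0.06)
```

In the global model described, the annual energy term would come from the siting and productivity prediction, and the cost terms from the substructure and mooring design.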

Keywords: aero-hydrodynamic model, computational fluid dynamics, floating offshore wind, siting, verification, and validation

Procedia PDF Downloads 180
220 Gentrification in Istanbul: The Twin Paradox

Authors: Tugce Caliskan

Abstract:

The gentrification literature in Turkey has provided important insights into the socio-spatial change in İstanbul, mostly through existing gentrification theories produced in the Anglo-American literature. Yet early research focused on classical gentrification while failing to notice other place-specific forms of the phenomenon. It was only after the mid-2000s that scholarly attention shifted to recent discussions in the mainstream, such as neoliberal urban policies, government involvement, and resistance. Although these studies have considerable potential to contribute to the geography of gentrification, copying the linear timeline of the Anglo-American conceptualization has limited the space to introduce a contextually nuanced account of the process in Turkey. More specifically, the gentrification literature in Turkey adopted the linear timeline of the process from the mainstream studies and made spontaneous classical gentrification the starting point in İstanbul, at the expense of contextually specific forms of the phenomenon that took place in the same years. This paper is an attempt to understand place-specific forms of gentrification by abandoning a linear understanding of time. In this vein, it approaches the process as moving both linearly and cyclically, rather than as waves succeeding one another. Maintaining a dialectical relationship between cyclical and linear time, the paper investigates how the components of gentrification have taken place in the cyclical timeline while becoming bolder in the linear timeline. It argues that taking (re)investment in the secondary circuit of capital and class transformation as the core characteristics of gentrification, and accordingly searching for these components beyond the linear timeline, provides strategic value for decentring existing perspectives, and not merely for Turkish studies.
This strategy reveals that the Western experience of gentrification did not travel to, and was not adopted or copied in, Turkey; rather, gentrification, as an abstract and general concept, emerged as a product of different contextual, historical and temporal forces, which must be considered within the framework of state-led urbanization as early as 1980, differing from Global North trajectories.

Keywords: comparative urbanism, geography of gentrification, linear and cyclical timeline, state-led gentrification

Procedia PDF Downloads 75
219 Kinaesthetic Method in Apprenticeship Training: Support for Finnish Learning in Vocational Education

Authors: Inkeri Jääskeläinen

Abstract:

The purpose of this study is to shed light on what it is like to study in apprenticeship training using Finnish as a second language. The study examines the stories and experiences of apprenticeship students learning and studying Finnish as part of their vocational studies. This pilot study also examines the effects of learning to pronounce Finnish through body motions and gestures. Many foreign students choose apprenticeships and start vocational training too early, while their language skills in Finnish are still very weak. Both duties at work and school assignments require reasonably good general language skills (B1.1), and, especially at work, language skills are also a safety issue. At work, students should be able to simultaneously learn Finnish and pursue vocational studies in a noisy, demanding, and stressful environment. Learning and understanding new things is very challenging under these circumstances, and sometimes students become exhausted and experience a lot of stress, which makes learning even more difficult. Students differ from one another, and so do their ways of learning. One of the most important features of apprenticeship training and second language learning is therefore a good understanding of adult learners and their needs. Kinaesthetic methods are an effective way to support adult students' cognitive skills and make learning more relaxing and fun. Empirical findings show that language learning can indeed be supported in physical ways, by body motions and gestures. The method used here, named TFFL (Touch and Feel Foreign Languages), was designed to support adult language learning, to correct or prevent language fossilization, and to help the student manage emotions. Finnish is considered a difficult language to learn, mostly because it is so different from nearly all other languages.
Many learners complain that they are lost or confused, and there is a need to find a way to simultaneously learn the language and handle the negative emotions that arise from the Finnish language and the learning process itself. Due to the nature of Finnish, good pronunciation skills are needed just to understand the way the language works. Movements of the body are a natural part of many cultures, but not of Finnish culture; in Finland, students have traditionally been expected to stay still, which is not natural for many foreign students. However, the kinaesthetic TFFL method proved to be a useful way to help some L2 students feel phonemes, rhythm and intonation, improve their Finnish and, thereby, successfully complete their vocational studies.

Keywords: Finnish, fossilization, interference, kinaesthetic method

Procedia PDF Downloads 79
218 The Effectiveness of First World Asylum Practices in Deterring Applications, Offering Bureaucratic Deniability, and Violating Human Rights: A Greek Case Study

Authors: Claudia Huerta, Pepijn Doornenbal, Walaa Elsiddig

Abstract:

Rising waves of nationalism around the world have led first-world migration-receiving countries to exploit the ambiguity of international refugee law and establish asylum application processes that deter applications, allow for bureaucratic deniability, and violate human rights. This case study of Greek asylum application practices argues that the 'pre-application' asylum process in Greece violates the spirit of international law by making it incredibly difficult for potential asylum seekers to apply for asylum, in essence violating the human rights of thousands of asylum seekers. This study’s focus is on the Greek mainland’s asylum 'pre-application' process, which in 2016 began to require those wishing to apply for asylum to do so during extremely restricted hours via a basic Skype line. The average wait to simply begin the registration process to apply for asylum is 81 days, during which time applicants are forced to live illegally in Greece. This study’s methodology in analyzing the 'pre-application' process consists of hours of interviews with asylum seekers, NGOs, and the Asylum Service office on the ground in Athens, as well as an analysis of the Greek Asylum Service’s historical asylum registration statistics. This study presents three main findings: the delays associated with the Skype system in Greece are the result of system design, as proven by a statistical analysis of Greek asylum registrations; NGOs have been co-opted by the state to perform state functions during the process; and the government’s use of technology is both purposefully lazy and discriminatory. In conclusion, the study argues that such asylum practices are part of a pattern among first-world migration-receiving countries of policies that discourage asylum seekers from applying and fall short of the standards of international law.

Keywords: asylum, European Union, governance, Greece, irregular, migration, policy, refugee, Skype

Procedia PDF Downloads 98
217 Kinaesthetic Method in Apprenticeship Training: Support for Finnish Learning in Vocational Education and Training

Authors: Inkeri Jaaskelainen

Abstract:

The purpose of this study is to shed light on what it is like to study in apprenticeship training using Finnish as a second language. This study examines the stories and experiences of apprenticeship students learning and studying Finnish as part of their vocational studies. Also, this pilot study examines the effects of learning to pronounce Finnish through body motions and gestures. Many foreign students choose apprenticeships and start vocational training too early, while their language skills in Finnish are still very weak. Both duties at work and school assignments require reasonably good general language skills (B1.1), and, especially at work, language skills are also a safety issue. At work, students should be able to simultaneously learn Finnish and do vocational studies in a noisy, demanding, and stressful environment. Learning and understanding new things is very challenging under these circumstances, and sometimes students get exhausted and experience a lot of stress, which makes learning even more difficult. Students are different from each other, and so are their ways to learn. Therefore, one of the most important features of apprenticeship training and second language learning is a good understanding of adult learners and their needs. Kinaesthetic methods are an effective way to support adult students’ cognitive skills and make learning more relaxing and fun. Empirical findings show that language learning can indeed be supported in physical ways, by body motions and gestures. The method used here, named TFFL (Touch and Feel Foreign Languages), was designed to support adult language learning, to correct or prevent language fossilization, and to help the student to manage emotions. Finnish is considered a difficult language to learn, mostly because it is so different from nearly all other languages. 
Many learners complain that they are lost or confused, and there is a need to find a way to simultaneously learn the language and handle the negative emotions that come from the Finnish language and the learning process itself. Due to the nature of the Finnish language, good pronunciation skills are needed just to understand the way the language works. Movements (body movements etc.) are a natural part of many cultures, but not of Finnish culture. In Finland, students have traditionally been expected to stay still, and that is not natural for many foreign students. However, the kinaesthetic TFFL method proved to be a useful way to help some L2 students to feel phonemes, rhythm, and intonation, to improve their Finnish, and, thereby, also to successfully complete their vocational studies.

Keywords: Finnish, fossilization, interference, kinaesthetic method

Procedia PDF Downloads 104
216 Assessing the Impact of Heatwaves on Intertidal Mudflat Colonized by an Exotic Mussel

Authors: Marie Fouet, Olivier Maire, Cécile Masse, Hugues Blanchet, Salomé Coignard, Nicolas Lavesque, Guillaume Bernard

Abstract:

Exacerbated by global change, extreme climatic events such as atmospheric and marine heat waves may interact with the spread of non-indigenous species and their associated impacts on marine ecosystems. Since the 1970s, introductions of non-indigenous species through oyster exchanges have been numerous. Among them, the Asian date mussel Arcuatula senhousia has colonized a large number of ecosystems worldwide (e.g., California, New Zealand, Italy). In these places, A. senhousia led to important habitat modifications in the benthic compartment through physical, biological, and biogeochemical effects associated with the development of dense mussel populations. In Arcachon Bay (France), a coastal lagoon of the French Atlantic coast and a hotspot of oyster farming, abundances of A. senhousia recently increased, following a lag time of ca. 20 years since the first record of the species in 2002. Here, we addressed the potential effects of the interaction between the A. senhousia invasion and heatwave intensity on ecosystem functioning within an intertidal mudflat. More precisely, two realistic intensities (“High” and “Severe”) of combined marine and atmospheric heatwaves were simulated in an experimental tidal mesocosm system, in which sediment cores collected in situ, containing naturally varying densities of A. senhousia and their associated benthic communities, were exposed. Following a six-day exposure, community-scale responses were assessed by measuring benthic metabolism (oxygen and nutrient fluxes) in each core. Results show that, besides benthic metabolism being significantly enhanced with increasing heatwave intensity, mussel density clearly mediated the magnitude of the community-scale response, thereby highlighting the importance of understanding the interactive effects of environmental stressors co-occurring with non-indigenous species for a better assessment of their impacts.

Keywords: arcuatula senhousia, benthic habitat, ecosystem functioning, heatwaves, metabolism

Procedia PDF Downloads 21
215 Sponge Urbanism as a Resilient City Design to Overcome Urban Flood Risk, for the Case of Aluva, Kerala, India

Authors: Gayathri Pramod, Sheeja K. P.

Abstract:

Urban flooding has been rising in cities for the past few years, as a result of increasing urbanization and climate change. A resilient city design focuses on 'living with water', meaning that the city is capable of accommodating floodwaters without risking any loss of lives or property. A resilient city design incorporates green infrastructure, river edge treatment, open space design, etc., to form a city that functions as a whole for resilience. Sponge urbanism is a recent method of building resilient cities, introduced by China in 2014. Sponge urbanism is an apt method of resilience building for a tropical town like Aluva in Kerala. Aluva is a tropical town that experiences rainfall of about 783 mm per month during the rainy season. It is an urbanized town that faces the risk of urban and riverine flooding every year due to the presence of the Periyar River. Impervious surfaces and hard construction and development contribute towards flood risk by interfering with the natural flow and natural infiltration of water into the ground. This type of development is seen in Aluva as well. In this research, Aluva is designed as a town that has the resilient strategies of a sponge city and focuses on natural methods of construction. The flood susceptibility of Aluva is taken into account to design the spaces for sponge urbanism and, in turn, reduce the town's flood susceptibility. Aluva is analyzed, and high-risk zones for development are identified through studies. These zones are designed to withstand the risk of flooding. Various catchment areas are identified according to the natural flow of water, and these catchment areas are then designed to act as public open spaces and as detention ponds in case of heavy rainfall. Various development guidelines, according to land use, are also prescribed, which help in increasing the green cover of the town. 
Aluva is then designed to be a completely flood-adapted city, or sponge city, according to these guidelines and interventions.

Keywords: climate change, flooding, resilient city, sponge city, sponge urbanism, urbanization

Procedia PDF Downloads 120
214 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. Conventional adaptive array beamforming requires prior information on either the impinging direction or the waveform of the desired signal in order to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to maintain a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under the steering angle errors encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information, together with the received array data, is utilized to iteratively estimate the actual direction vector of the desired signal. 
The estimated direction vector of the desired signal is then used to find an appropriate quiescent weight vector. The other projection matrix serves as the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided to evaluate the proposed technique and compare it with existing robust techniques.
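The steered-beam constraint described in the abstract can be illustrated with a short NumPy sketch. This is the generic minimum-variance (MVDR-style) textbook beamformer with an assumed-accurate steering vector, not the authors' proposed robust estimator; the array size, angles, and diagonal-loading level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def mvdr_weights(snapshots, a):
    """Minimum-variance weights under the unit-gain steering constraint."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R += 1e-3 * np.eye(R.shape[0])          # diagonal loading for stability
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Desired signal at 0 deg, interferer at 40 deg, 8-sensor array, 500 snapshots
n, t = 8, 500
a_sig = steering_vector(0.0, n)
a_int = steering_vector(40.0, n)
s = (rng.standard_normal(t) + 1j * rng.standard_normal(t)) / np.sqrt(2)
i = 3 * (rng.standard_normal(t) + 1j * rng.standard_normal(t)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((n, t)) + 1j * rng.standard_normal((n, t)))
x = np.outer(a_sig, s) + np.outer(a_int, i) + noise

w = mvdr_weights(x, a_sig)
# Unit response in the steering direction, strong rejection of the interferer
print(abs(w.conj() @ a_sig))
print(abs(w.conj() @ a_int))
```

When the presumed steering angle is wrong, this same construction partially cancels the desired signal; that sensitivity is exactly what the projection-based correction in the abstract is designed to mitigate.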

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 88
213 An Unusual Cause of Electrocardiographic Artefact: Patient's Warming Blanket

Authors: Sanjay Dhiraaj, Puneet Goyal, Aditya Kapoor, Gaurav Misra

Abstract:

In electrocardiography, the term artefact denotes a signal that is not generated by the heart. Although technological advancements have produced monitors capable of providing accurate information and reliable heart rate alarms, interference with the displayed electrocardiogram still occurs. These interferences can come from the various electrical gadgets present in the operating room or from electrical signals from other parts of the body. Artefacts may also occur due to poor electrode contact with the body or machine malfunction. Recognizing these artefacts is of utmost importance so as to avoid unnecessary and unwarranted diagnostic as well as interventional procedures. We report a case of ECG artefacts occurring due to a patient warming blanket and its consequences. A 20-year-old male with a preoperative diagnosis of exstrophy-epispadias complex was posted for surgery under epidural and general anaesthesia. Just after endotracheal intubation, we observed nonspecific ECG changes on the monitor. At first glance, the monitor strip revealed broad QRS complexes suggesting a ventricular bigeminal rhythm. Closer analysis revealed these to be artefacts: although the complexes looked broad at first glance, normal sinus complexes were clearly present, each immediately followed by a 'broad complex', an artefact produced by some device or connection. These broad complexes were labeled as artefacts because they originated in the absolute refractory period of the preceding normal sinus beat. It would be physiologically impossible for the myocardium to depolarize so rapidly as to produce a second QRS complex. 
A search for the possible cause of the artefacts was made. After deepening the plane of anaesthesia, ruling out electrolyte abnormalities, checking the ECG leads and their connections, changing monitors, checking all other monitoring connections, and checking the grounding of the anaesthesia machine and OT table, we found that switching off the patient’s warming apparatus returned the rhythm to normal sinus, and the 'broad complexes', or artefacts, disappeared. As misdiagnosis of ECG artefacts may subject patients to unnecessary diagnostic and therapeutic interventions, a thorough knowledge of the patient and the monitors allows for quick interpretation and resolution of the problem.

Keywords: ECG artefacts, patient warming blanket, peri-operative arrhythmias, mobile messaging services

Procedia PDF Downloads 239
212 Analyzing the Sound of Space - The Glissando of the Planets and the Spiral Movement on the Sound of Earth, Saturn and Jupiter

Authors: L. Tonia, I. Daglis, W. Kurth

Abstract:

The sound of the universe creates an affinity with the sounds of music. The analysis of the sound of space focuses on the existence of a tone material, the microstructure and macrostructure, and the form of the sound, through the signals recorded during the flights of the Van Allen Probes and Cassini’s mission. The sound derives from frequencies that belong to electromagnetic waves. The Plasma Wave Science instrument and the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) recorded the signals from space. Transforming those signals to audio gave the opportunity to study and analyze the sound. Because a musical tone pitch has a frequency, and every electromagnetic wave likewise produces a frequency, the creation of a musical score, which appears as the sound of space, can give information about the form, the symmetry, and the harmony of the sound. The conversion of space radio emissions to audio provides a number of tone pitches corresponding to the original frequencies. Through the processing of these sounds, we have the opportunity to present a music score “composed” by space. In this score, we can see some basic features associated with musical form: the structure, the tone center of the musical material, and the construction and deconstruction of the sound. The structure, which is built through a harmonic world, includes tone centers, major and minor scales, sequences of chords, and types of cadences. The form of the sound exhibits the symmetry of a spiral movement, not only in its micro-structural but also in its macro-structural shape. Multiple glissando sounds, in both linear and polyphonic processes, were found in the magnetic fields around Earth, Saturn, and Jupiter, and a spiral movement also appeared on the spectrogram of the sound. 
Whistlers, Auroral Kilometric Radiation, and chorus emissions reveal movements similar to excerpts of works by contemporary composers such as Sofia Gubaidulina, Iannis Xenakis, and Einojuhani Rautavaara.
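The frequency-to-pitch correspondence that this kind of sonification relies on can be sketched in a few lines. The equal-temperament mapping below is standard (MIDI note 69 = A4 = 440 Hz); the function name is ours, and the example frequencies are illustrative, not values from the spacecraft data.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz, a4=440.0):
    """Map a frequency to the nearest equal-tempered pitch name and octave."""
    midi = round(69 + 12 * math.log2(freq_hz / a4))   # nearest MIDI note number
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1
    return f"{name}{octave}"

print(freq_to_pitch(440.0))    # A4
print(freq_to_pitch(523.25))   # C5
print(freq_to_pitch(1000.0))   # B5 (nearest pitch to 1 kHz)
```

Applied to a whistler's downward frequency sweep, such a mapping yields the descending glissando figures the abstract describes.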

Keywords: space sound analysis, spiral, space music, analysis

Procedia PDF Downloads 143
211 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human intervention, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are inefficient at optimizing design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN comprises two simultaneously running neural networks: spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean square error (MSE). After training, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. 
The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations is significantly decreased by this approach. Such efficient and ultrafast bi-directional DNN models therefore allow rapid design of complicated chipless RFID tags.
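The spectrum-prediction half of such a pipeline can be sketched as a toy one-hidden-layer network trained by gradient descent on MSE. The synthetic "spectrum" generator, network sizes, and learning rate below are illustrative stand-ins, not the paper's simulated EM data or actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate task: map 2 ring dimensions -> a 32-point "spectrum"
# (a synthetic stand-in for simulated EM signatures, not real tag data).
def synthetic_spectrum(params):
    f = np.linspace(0.0, 1.0, 32)
    r1, r2 = params[..., :1], params[..., 1:]
    return np.exp(-((f - r1) ** 2) / 0.01) + 0.5 * np.exp(-((f - r2) ** 2) / 0.02)

X = rng.uniform(0.2, 0.8, size=(256, 2))   # design parameters
Y = synthetic_spectrum(X)                  # target spectra

# One-hidden-layer MLP trained by plain gradient descent on MSE
W1 = rng.standard_normal((2, 64)) * 0.5
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 32)) * 0.1
b2 = np.zeros(32)
lr = 0.01

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(X)
mse_start = np.mean((y0 - Y) ** 2)

for _ in range(3000):
    h, y = forward(X)
    g = 2 * (y - Y) / len(X)               # gradient of the squared error
    gh = (g @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W2 -= lr * h.T @ g
    b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh
    b1 -= lr * gh.sum(0)

_, y1 = forward(X)
mse_end = np.mean((y1 - Y) ** 2)
print(mse_start, "->", mse_end)            # training error drops
```

The paper's full bi-directional scheme couples such a forward (parameters-to-spectrum) network with an inverse (spectrum-to-parameters) network trained jointly; the sketch above only shows the forward half on synthetic data.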

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 11
210 EEG and ABER Abnormalities in Children with Speech and Language Delay

Authors: Bharati Mehta, Manish Parakh, Bharti Bhandari, Sneha Ambwani

Abstract:

Speech and language delay (SLD) is commonly seen as a co-morbidity in children having severe resistant focal and generalized, syndromic, and symptomatic epilepsies. It is, however, not clear whether epilepsy contributes to the pathogenesis of SLD or is a mere association. Also, it is acknowledged that Auditory Brainstem Evoked Responses (ABER), besides being used for evaluating hearing thresholds, also aid in the prognostication of neurological disorders and abnormalities of the hearing pathway in the brainstem. There is no circumscribed or surrogate neurophysiologic laboratory marker to judge the extent of SLD. The current study was designed to evaluate abnormalities in electroencephalography (EEG) and ABER in children with SLD who do not have an overt hearing deficit or autism. Ninety-four children aged 2-8 years with predominant SLD and without any gross motor developmental delay, head injury, gross hearing disorder, cleft lip/palate, or autism were selected. Standard video electroencephalography using the international 10-20 system and ABER after click stimuli at intensities from 110 dB down to 40 dB were performed in all children. EEG was abnormal in 47.9% (n = 45; 36 boys and 9 girls) of the children. In the children with abnormal EEG, 64.5% (n = 29) had an abnormal background, 57.8% (n = 27) had generalized interictal epileptiform discharges (IEDs), 20% (n = 9) had focal epileptiform discharges exclusively from the left side, and 33.3% (n = 15) had multifocal IEDs occurring either in isolation or associated with generalized abnormalities. In ABER, surprisingly, the peak latencies for waves I, III, and V, the inter-peak latencies I-III, I-V, and III-V, and the wave amplitude ratio V/I were found to be within normal limits in both ears of all the children. Thus, in the current study, generalized IEDs in EEG were seen with higher frequency in children with SLD, and focal IEDs were seen exclusively in the left hemisphere in these children. 
It is possible that, even with generalized EEG abnormalities present in these children, left hemispheric abnormalities as part of this generalized dysfunction may be responsible for the speech and language dysfunction. The current study also suggests that ABER need not be routinely recommended as a diagnostic or prognostic tool in children with SLD without a frank hearing deficit or autism, thus reducing the burden on electrophysiologists and laboratories and saving time and financial resources.

Keywords: ABER, EEG, speech, language delay

Procedia PDF Downloads 483
209 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied to the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries. 
The ANN method was superior to the derivative technique, as it could determine both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and its application does not require a specialist analyst. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
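The zero-crossing principle behind the first method can be demonstrated numerically: if two components absorb additively (Beer's law), the mixture's first derivative, read at the wavelength where one component's derivative crosses zero (its absorption maximum), depends on the other component alone. The Gaussian band shapes and positions below are invented for illustration and are not the measured Q and E spectra.

```python
import numpy as np

wl = np.linspace(220, 320, 1001)                  # wavelength grid, nm

# Hypothetical Gaussian absorption bands standing in for Q and E spectra
band_q = np.exp(-((wl - 275) ** 2) / (2 * 15 ** 2))
band_e = np.exp(-((wl - 245) ** 2) / (2 * 12 ** 2))

d_q = np.gradient(band_q, wl)
d_e = np.gradient(band_e, wl)

# Zero-crossing of E's first derivative = E's absorption maximum (245 nm).
# There, any mixture's derivative amplitude depends on Q alone.
i0 = np.argmin(np.abs(wl - 245))

for c_q in (1.0, 2.0, 3.0):
    mix = c_q * band_q + 0.7 * band_e             # Beer-law additive mixture
    d_mix = np.gradient(mix, wl)
    print(c_q, d_mix[i0] / d_q[i0])               # recovers c_q
```

In practice the D1 amplitude at the zero-crossing is regressed against known standards to build the linear calibration curve, as described in the abstract.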

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 417
208 Change in Self-Reported Personality in Students of Acting

Authors: Nemanja Kidzin, Danka Puric

Abstract:

Recently, the field of personality change has received an increasing amount of attention. Previously under-researched variables, such as the intention to change or taking on new social roles (in a working environment, education, family, etc.), have been shown to be relevant for personality change. Following this line of research, our study aimed to determine whether the process of acting can bring about personality changes in students of acting and, if yes, in which way. We hypothesized that there would be a significant difference between the self-reported personality traits of acting students at the beginning and the end of preparing for a role. Additionally, as potential moderator variables, we measured the reported personality traits of the roles the students were acting, as well as empathy, disintegration, and years of formal education. The sample (N = 47) was composed of students of acting from the Faculty of Dramatic Arts (first- to fourth-year) and the Faculty of Modern Arts (first-year students only). Participants' mean age was 20.2 (SD = 1.47), and 64% were female. The procedure included two waves of testing (T1 at the beginning and T2 at the end of the semester), and students’ acting exercises and character immersion comprised the pseudo-experimental procedure. Students’ personality traits (HEXACO-60, self-report version), empathy (Questionnaire of Cognitive and Affective Empathy, QCAE), and disintegration (DELTA9, 10-item version) were measured at both T1 and T2, while the personality of the role (HEXACO-60, observer version) was measured at T2. Responses to all instruments were given on a 5-point Likert scale. A series of repeated-measures t-tests showed significant differences in emotionality (t(46) = 2.56, p = 0.014) and conscientiousness (t(46) = -2.39, p = 0.021) between T1 and T2. Moreover, an index of absolute personality change was significantly different from 0 for all traits (range .34 to .53; t(46) = 4.20, p < .001 for the lowest index). 
The average test-retest correlation for HEXACO traits was 0.57, which is lower than reported in similar research. As for the moderator variables, neither the personality of the role nor empathy or disintegration explained the change in students’ personality traits. The magnitude of personality change was highest in fourth-year students, with no significant differences among the remaining three years of study. Overall, our results seem to indicate some personality changes in students of acting. However, these changes cannot be unequivocally attributed to the process of preparing for a role. Further and methodologically stricter research is needed to unravel the role of acting in personality change.
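The repeated-measures comparisons reported above (e.g., t(46) = 2.56 for emotionality) are paired t-tests. A minimal sketch of the statistic with synthetic scores follows; the means and spreads below are invented for illustration, not the study's data.

```python
import numpy as np

def paired_t(x1, x2):
    """Repeated-measures (paired) t statistic and its degrees of freedom."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

rng = np.random.default_rng(7)
n = 47                                   # sample size used in the study
t1 = rng.normal(3.2, 0.5, n)             # trait score at start of semester
t2 = t1 + rng.normal(0.15, 0.4, n)       # hypothetical shift at end of semester

t, df = paired_t(t2, t1)
print(round(t, 2), df)                   # t statistic with df = 46
```

The p-value would then come from the t distribution with n - 1 degrees of freedom (e.g., via `scipy.stats.ttest_rel`, which computes the same statistic directly).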

Keywords: theater, personality change, acting, HEXACO

Procedia PDF Downloads 141
207 Effect of Climate Variability on Children Health Outcomes in Rural Uganda

Authors: Emily Injete Amondo, Alisher Mirzabaev, Emmanuel Rukundo

Abstract:

Children in rural farming households are often vulnerable to a multitude of risks, including health risks associated with climate change and variability. Cognizant of this, the present study empirically traced the relationship between climate variability and nutritional health outcomes in rural children while identifying the cause-and-effect transmission mechanisms. We combined four waves of the rich Uganda National Panel Survey (UNPS), part of the World Bank Living Standards Measurement Studies (LSMS), for the period 2009-2014, with long-term and high-frequency rainfall and temperature datasets. Self-reported drought and flood shock variables were further used in separate regressions for triangulation purposes and robustness checks. Panel fixed effects regressions were applied in the empirical analysis, accounting for a variety of causal identification issues. The results showed significant negative outcomes for children’s anthropometric measurements due to the impacts of moderate and extreme droughts, extreme wet spells, and heatwaves. On the contrary, moderate wet spells were positively linked with nutritional measures. Agricultural production and child diarrhea were the main transmission channels, with heatwaves, droughts, and high rainfall variability negatively affecting crop output. The probability of diarrhea was positively related to increases in temperature and dry spells. Results further revealed that children in households that engaged in ex-ante or anticipatory risk-reducing strategies such as savings had better health outcomes than those engaged in ex-post coping such as involuntary changes of diet. These results highlight the importance of adaptation in smoothing the harmful effects of climate variability on the health of rural households and children in Uganda.
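The panel fixed-effects strategy mentioned above can be sketched on synthetic data: demeaning each household's series removes time-invariant household effects, so the slope is identified from within-household variation. The data-generating numbers below are invented for illustration, not the UNPS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic panel: n households observed over t survey waves
n, t = 200, 4
beta_true = -0.8                       # effect of a climate shock on a z-score
alpha = rng.normal(0, 1, n)            # unobserved household fixed effects
x = rng.normal(0, 1, (n, t)) + 0.5 * alpha[:, None]   # shock correlated with alpha
y = beta_true * x + alpha[:, None] + rng.normal(0, 0.3, (n, t))

# Within (fixed-effects) transformation: demean each household's series,
# which removes alpha_i, then run pooled OLS on the demeaned data.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()

# Naive pooled OLS ignoring the fixed effects is biased here, because the
# regressor is correlated with the household effect.
beta_ols = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

print(round(beta_fe, 2), round(beta_ols, 2))   # FE estimate near -0.8; OLS biased
```

This is the textbook within estimator, not the authors' full specification, which additionally handles identification issues beyond household fixed effects.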

Keywords: extreme weather events, undernutrition, diarrhea, agricultural production, gridded weather data

Procedia PDF Downloads 72
206 Diversity of Large Mammals in Awash National Park and its Ecosystem Role and Biodiversity Conservation, Ethiopia

Authors: Sintayehu W. Dejene

Abstract:

An ecological and biodiversity conservation study on the species composition, population status, and habitat associations of large mammals, and on the impact of human interference on their distribution, was carried out in Awash National Park (ANP), Ethiopia, from October 2012 to July 2013. A total of 25 species of large mammals were recorded from the study area. Representative sample sites were taken from each habitat type and surveyed using a random line transect method. For the medium and large mammal survey, indirect methods (footprints and dung) and direct observations were used, and 23 species of medium to large-sized mammals were identified and recorded from ANP. In addition, 20 species of rodents belonging to three families and five species of insectivores belonging to two families were recorded. Beisa oryx (Oryx beisa beisa), Soemmering's gazelle (Gazella soemmeringi), Defassa waterbuck (Kobus defassa), lesser kudu (Strepsiceros imberbis), greater kudu (Strepsiceros strepsiceros), warthog (Phacochoerus aethiopicus), Anubis baboon (Papio anubis), and Salt's dikdik (Madoqua saltiana) were the most commonly seen medium and large mammals in the study area. Beisa oryx and Soemmering's gazelle are commonly found in the open areas, whereas greater kudu and lesser kudu were seen in the bushed areas. Defassa waterbuck was observed in the bushy river area in the northern part of the park. Anubis baboon was seen near the riverside, and Hamadryas baboon was found in the semi-desert areas of Awash National Park, particularly in the Filwoha area. The area is a key site for biodiversity conservation and provides clean water, air, food, grazing land, and carbon storage.

Keywords: awash national park, biodiversity, ecosystem value, habitat association, large mammals, population status, species composition

Procedia PDF Downloads 351
205 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state; the vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contributions of the vapor and liquid phases to the overpressure generated in the near field. An experimental campaign was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, along with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel explosion and the blast formation by shadowgraphy. The results show that the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak, and a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory of how the second shock observed in the experiments forms is proposed through an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force exerted on the ground.
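The finding that the vapor phase dominates the lead overpressure is consistent with classical first-order estimates of stored vapor energy. A common one (not the authors' prediction model, just the standard Brode approximation) treats the compressed vapor space as the energy reservoir; the burst pressure, vapor volume, and heat-capacity ratio below are illustrative values, not measurements from these experiments:

```python
def brode_energy(p_burst_pa, p_ambient_pa, vapor_volume_m3, gamma=1.13):
    """Brode estimate of the stored expansion energy of a compressed vapor
    space: E = (p1 - p0) * V / (gamma - 1)."""
    return (p_burst_pa - p_ambient_pa) * vapor_volume_m3 / (gamma - 1)

# Illustrative numbers: 1.9 MPa burst pressure, 0.1 MPa ambient,
# 0.05 m^3 of vapor, gamma ~ 1.13 for propane vapor.
energy_j = brode_energy(1.9e6, 0.1e6, 0.05)
print(f"{energy_j / 1e3:.0f} kJ")  # prints "692 kJ"
```

Such an estimate ignores the flashing liquid contribution entirely, which is precisely why experiments like this one are needed to apportion the overpressure between the phases.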

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 42
204 Subsidiary Entrepreneurial Orientation, Trust in Headquarters and Performance: The Mediating Role of Autonomy

Authors: Zhang Qingzhong

Abstract:

Though there is a growing body of research on the headquarters-subsidiary relationship, with a focus on subsidiaries' contributory role within multinational corporations (MNCs), subsidiary autonomy and the conditions under which it affects subsidiary performance still constitute a subject of debate in the literature. The objective of this research is to study the relationship between MNC subsidiary autonomy and performance, and the effect of subsidiary entrepreneurial orientation and trust on subsidiary autonomy, in the China environment, a setting that has not yet been studied. The research addresses three questions: (i) Is subsidiary autonomy associated with MNC subsidiary performance in the China environment? (ii) How do subsidiary entrepreneurial orientation and trust in headquarters affect the level of subsidiary autonomy and its relationship with subsidiary performance? (iii) Does subsidiary autonomy mediate the effects of subsidiary entrepreneurial orientation and trust in headquarters on subsidiary performance? We reviewed the literature and conducted semi-structured interviews with MNC subsidiary senior executives in China. Building on insights from the interviews and drawing on four theories, namely the resource-based view (RBV), resource dependency theory, the integration-responsiveness framework, and social exchange theory, as well as the extant literature on subsidiary autonomy, entrepreneurial orientation, trust, and subsidiary performance, we developed a model of the direct and mediating effects of subsidiary autonomy on subsidiary performance within the MNC. To test the model, we collected and analyzed data from a two-wave, cross-industry online survey of 102 MNC subsidiaries in China. We used structural equation modeling to test the measurement model, the direct-effect model, and the hypothesized conceptual framework. Our findings confirm that (a) subsidiary autonomy is positively related to subsidiary performance; (b) subsidiary entrepreneurial orientation is positively related to subsidiary autonomy; (c) a subsidiary's trust in headquarters has a positive effect on subsidiary autonomy; (d) subsidiary autonomy mediates the relationship between entrepreneurial orientation and subsidiary performance; and (e) subsidiary autonomy mediates the relationship between trust and subsidiary performance. Our study highlights the important role of subsidiary autonomy in leveraging subsidiary entrepreneurial orientation and the trust relationship with headquarters to achieve high performance. We discuss the theoretical and managerial implications of the findings and propose directions for future research.
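The mediation structure tested here (entrepreneurial orientation → autonomy → performance) can be sketched with a simple product-of-coefficients analysis. The data below are synthetic stand-ins, not the study's survey responses, and a real SEM would additionally model latent constructs and measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102  # sample size matching the study's 102 subsidiaries

# Synthetic data: X = entrepreneurial orientation, M = autonomy, Y = performance
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # path a
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.5, size=n)  # paths b and c'

def ols_slope(x, y):
    """OLS slope of y on x, with an intercept term."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1]

a = ols_slope(X, M)                      # X -> M
A2 = np.column_stack([np.ones(n), M, X])
coef2, *_ = np.linalg.lstsq(A2, Y, rcond=None)
b, c_prime = coef2[1], coef2[2]          # M -> Y and direct X -> Y

indirect = a * b                         # mediated (indirect) effect
total = ols_slope(X, Y)                  # total effect; equals c' + a*b in OLS
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

The algebraic identity total = direct + indirect holds exactly for these OLS regressions, which is what makes the product a * b a natural summary of the mediated path; significance in practice is usually assessed with bootstrapped confidence intervals.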

Keywords: subsidiary entrepreneurial orientation, trust, subsidiary autonomy, subsidiary performance

Procedia PDF Downloads 158
203 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to derive tighter evolution-time bounds and to generalize them to mixed states; however, we are yet to know: (i) What is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution interferometrically, by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh-effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realizable bound is a need of the hour. It would be even more interesting if one could relate various properties of states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such an understanding might help us control and manipulate the speed of communication, apart from particular cases such as the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) Can we relate such quantities to the QSL? In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of quantum speed limits is how they behave under classical mixing and partial elimination of states, since this may help us properly choose a state or evolution operator to control the speed limit. We address this question as well and show that the product of the evolution-time bound and the quantum part of the uncertainty in energy, i.e., the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
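As a minimal numerical illustration of the coherence measure invoked here, the Wigner-Yanase skew information I(ρ, H) = -½ Tr([√ρ, H]²) can be computed directly; the qubit states below are standard textbook examples chosen for illustration, not states from the paper:

```python
import numpy as np

def skew_information(rho, H):
    """Wigner-Yanase skew information I(rho, H) = -1/2 Tr([sqrt(rho), H]^2)."""
    # Matrix square root of the Hermitian, positive semidefinite density matrix
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T
    comm = sqrt_rho @ H - H @ sqrt_rho
    return -0.5 * np.trace(comm @ comm).real

sz = np.diag([1.0, -1.0])                    # Pauli-Z as the "energy" observable
plus = np.array([[0.5, 0.5], [0.5, 0.5]])    # |+><+|: pure, maximally coherent in the Z basis
mixed = 0.5 * np.eye(2)                      # maximally mixed: no coherence

print(skew_information(plus, sz))   # 1.0 (equals the variance of Z for this pure state)
print(skew_information(mixed, sz))  # 0.0
```

The two extremes match the interpretation in the abstract: for pure states the skew information reduces to the full variance of H, while classical mixing destroys the quantum part of the uncertainty and drives the measure, and with it the speed of evolution generated by H, toward zero.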

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase Skew information

Procedia PDF Downloads 316
202 Impact of Enzyme-Treated Bran on the Physical and Functional Properties of Extruded Sorghum Snacks

Authors: Charles Kwasi Antwi, Mohammad Naushad Emmambux, Natalia Rosa-Sibakov

Abstract:

The consumption of high-fibre snacks helps reduce the prevalence of many non-communicable diseases and improves human health. However, using high-fibre flour to produce snacks by extrusion cooking reduces the expansion ratio of the snacks, thereby decreasing their sensory properties and consumer acceptability. This study determined the effects of adding Viscozyme®-treated sorghum bran on the properties of extruded sorghum snacks, with the aim of producing high-fibre expanded snacks of acceptable quality. Using a twin-screw extruder, sorghum endosperm flour (obtained by decortication), with and without untreated or enzyme-treated sorghum bran, was extruded at high shear rates with a feed moisture of 20%, a feed rate of 10 kg/h, a screw speed of 500 rpm, and temperature zones of 60°C, 70°C, 80°C, 140°C, and 140°C toward the die. The resulting expanded snacks were analysed for their physical (expansion ratio, bulk density, colour profile), chemical (soluble and insoluble dietary fibre), and functional (water solubility index (WSI) and water absorption index (WAI)) characteristics. Snacks produced from refined sorghum flour enriched with Viscozyme-treated bran had expansion ratios similar to those of refined sorghum flour extrudates, and higher than those of untreated-bran extrudates. Extrudates without bran showed higher expansion ratios and lower bulk densities than the untreated-bran extrudates, and the enzyme-treated bran significantly increased the expansion ratio and lowered the bulk density compared to untreated bran. Compared to untreated-bran extrudates, WSI values of enzyme-treated samples increased, while WAI values decreased. Enzyme treatment of the bran reduced particle size and increased soluble dietary fibre, which increased expansion; the lower particle size suggests less interference with bubble formation at the die.
Viscozyme-treated bran-sorghum composite flour could be used as raw material to produce high-fibre expanded snacks with improved physicochemical and functional properties.
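For context, the physical and functional indices reported here are straightforward ratios. The definitions below follow common practice (radial expansion ratio and Anderson-style WSI/WAI); the measurement values are hypothetical, not data from this study:

```python
def expansion_ratio(extrudate_diameter_mm, die_diameter_mm):
    """Radial expansion ratio: extrudate diameter over die diameter."""
    return extrudate_diameter_mm / die_diameter_mm

def wsi_percent(supernatant_solids_g, dry_sample_g):
    """Water solubility index: dissolved solids as a % of dry sample weight."""
    return supernatant_solids_g / dry_sample_g * 100

def wai(sediment_gel_g, dry_sample_g):
    """Water absorption index: grams of sediment gel per gram of dry sample."""
    return sediment_gel_g / dry_sample_g

# Hypothetical measurements: a 12 mm extrudate from a 4 mm die;
# 2.5 g dry sample yielding 0.8 g soluble solids and 6.0 g of gel.
print(expansion_ratio(12.0, 4.0))  # 3.0
print(wsi_percent(0.8, 2.5))       # 32.0
print(wai(6.0, 2.5))               # 2.4
```

Higher WSI with lower WAI, the pattern reported for the enzyme-treated samples, is the signature of starch and fibre fragments becoming more soluble rather than swelling as a gel.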

Keywords: extrusion, sorghum bran, decortication, expanded snacks

Procedia PDF Downloads 49