Search results for: motion capture
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2409

399 Nanorods Based Dielectrophoresis for Protein Concentration and Immunoassay

Authors: Zhen Cao, Yu Zhu, Junxue Fu

Abstract:

Immunoassay, i.e., antigen-antibody reaction, is crucial for disease diagnostics. To achieve an adequate signal for antigen detection, a large amount of sample and a long incubation time are needed. However, the amount of protein is usually small at the early stage of disease, which makes it difficult to detect. Unlike cells and DNA, proteins cannot be amplified by any valid chemical method. Thus, an alternative way to improve the signal is to concentrate the proteins through particle manipulation techniques, among which dielectrophoresis (DEP) is an effective one. DEP is a technique that concentrates particles in a designated region through a force created by the gradient of a non-uniform electric field. Since the DEP force is proportional to the cube of the particle size and to the gradient of the squared electric field, it is relatively easy to capture larger particles such as cells. For smaller ones like proteins, a very high gradient is required. In this work, three-dimensional Ag/SiO2 nanorod arrays, fabricated by a simple physical vapor deposition technique called oblique angle deposition, have been integrated with a DEP device, creating a field gradient as high as 2.6×10²⁴ V²/m³. The nanorod-based DEP device is able to enrich bovine serum albumin (BSA) protein 1800-fold, at a rate of 180-fold/s, with an applied potential of only 5 V. Based on this nanorod-integrated DEP platform, an immunoassay of mouse immunoglobulin G (IgG) proteins has been performed. Briefly, specific antibodies are immobilized onto the nanorods, then IgG proteins are concentrated and captured, and finally, the signal from fluorescence-labelled antibodies is detected. The limit of detection (LoD) is measured as 275.3 fg/mL (~1.8 fM), a 20,000-fold enhancement compared with identical assays performed on blank glass plates. Further, prostate-specific antigen (PSA), a cancer biomarker for the diagnosis of prostate cancer after radical prostatectomy, is also quantified, with an LoD as low as 2.6 pg/mL. The time to signal saturation has been reduced significantly, to one minute. In summary, together with an easy nanorod fabrication and integration method, this nanorod-based DEP platform has demonstrated highly sensitive immunoassay performance and thus shows great potential for early point-of-care diagnostics.
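
A hedged numerical aside on the scaling just described: in the standard point-dipole approximation, the time-averaged DEP force is F = 2πr³ε₀ε_m·Re[K(ω)]·∇|E|², cubic in particle radius and linear in the gradient of the squared field. The Python sketch below plugs the device's reported gradient into that textbook formula; the particle radii, medium permittivity, and Clausius-Mossotti factor are invented for illustration.

```python
import numpy as np

# Point-dipole approximation of the time-averaged DEP force:
#   F_DEP = 2 * pi * r^3 * eps0 * eps_m * Re[K(w)] * grad(|E|^2)
# cubic in particle size, linear in the gradient of the squared field.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dep_force(radius_m, eps_medium_rel, re_cm_factor, grad_E2):
    """Magnitude of the DEP force in newtons.

    radius_m       : particle radius (m), invented below
    eps_medium_rel : relative permittivity of the medium (assumed, ~water)
    re_cm_factor   : real part of the Clausius-Mossotti factor (assumed)
    grad_E2        : gradient of the squared field, V^2/m^3
    """
    return 2 * np.pi * radius_m**3 * EPS0 * eps_medium_rel * re_cm_factor * grad_E2

# Illustrative comparison: a 10 um cell vs a ~5 nm protein at the gradient
# reported for the nanorod device (2.6e24 V^2/m^3), showing why proteins
# need such extreme gradients.
for label, r in [("cell (10 um)", 5e-6), ("protein (~5 nm)", 2.5e-9)]:
    print(label, dep_force(r, 78.5, 0.5, 2.6e24), "N")
```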

Keywords: dielectrophoresis, immunoassay, oblique angle deposition, protein concentration

Procedia PDF Downloads 82
398 Costume Portrayal in K. Asif’s Mughal-e-Azam

Authors: Anketa Kumar, Rajantheran Al Muniandy, Rishabh Kumar

Abstract:

For centuries, Indian costumes have been admired for their aesthetic, functional and narrative qualities. The purpose of the current study is to investigate the role of costumes as visual narratives in Hindi cinema, since filmmaking is simply one of the most recent manifestations of the human desire to tell stories, in which costume acts as an intertext to be read by the viewers watching the films. The problem that prompted this study arose because clothes become an interesting topic when examined within the social structures in which they are worn. It is this visual image of dress worn by the character that is investigated in this research through the Hindi cinema of the 1960s, which was a realistic reflection of the society. This research integrates Roland Barthes' semiotic theory in analyzing the main characters of the National Award-winning Hindi movie Mughal-e-Azam (1960). The research helps fill the gap between a singular level of interpretation and another level that offers a solution towards bridging viewers' manifold interpretations of a particular movie. This study focuses on how visual appearance communicates in the building up of perception and can relate to notions of realism, defining cultural identity and status in society. The research methodology is qualitative and descriptive in nature, employing the freeze-frame technique, with the portrayal of costumes explained through Barthes' principles of semiotics. The freeze-frame technique stops the motion of the film on a single frame and allows the chosen image to be read as a still photograph. The finding of this research into costume portrayal in the movie was that freezing the frame in the midst of running the film drew attention to intricate costume details, allowing nuanced observations of these minutiae to be recorded during the movie. The interpretive analysis of K. Asif's Mughal-e-Azam focused on certain aspects of the king's costumes. On the same idea, further research can be employed to strengthen the relation between costumes and visual narration.

Keywords: character portrayal, costumes, Indian cinema, semiotics, visual significance

Procedia PDF Downloads 162
397 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA-Italy

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector coupling and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition to meet international decarbonization objectives, also including it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future, setting ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require a careful investigation of their feasibility in terms of economic expenditure and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible hydrogen development has been studied on both the supply side and the demand side of the energy system, also including storage options and distribution chains. The assessment comprises the alternative hydrogen production technologies involved in a competitive market, reflecting the several possible investments set out by the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles (considering both road and non-road transport categories) to steel and chemical industry uses and cogeneration for residential and commercial buildings. The model includes both high- and low-TRL technologies in order to provide outcomes as consistent for the future decades as for the present day, and since it is developed with an open-source code instance and database, transparency and accessibility are fully granted.
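
TEMOA itself is a full open-source energy system optimization framework. As a minimal sketch of the underlying idea it applies (cost-minimizing competition among supply technologies under policy constraints), the toy linear program below chooses among three hypothetical hydrogen production routes; all technologies, costs, and limits are invented and are not TEMOA-Italy data.

```python
from scipy.optimize import linprog

# Toy cost-minimizing choice of hydrogen supply, in the spirit of an energy
# system optimization model. Decision variables: annual H2 output (kt) from
# electrolysis, steam methane reforming (SMR), and SMR with CO2 capture.
cost = [4.5, 1.5, 2.5]       # EUR/kg, hypothetical production costs
co2 = [0.0, 9.0, 1.0]        # kg CO2 per kg H2, hypothetical intensities

demand = 500.0               # kt H2/year to be met (invented)
co2_cap = 1500.0             # kt CO2/year emission cap (invented)

res = linprog(
    c=cost,
    A_ub=[co2],              # emission constraint: sum(co2_i * x_i) <= cap
    b_ub=[co2_cap],
    A_eq=[[1.0, 1.0, 1.0]],  # demand balance: total output == demand
    b_eq=[demand],
    bounds=[(0, None)] * 3,
)
print("electrolysis, SMR, SMR+CCS (kt):", res.x)
```

Tightening the emission cap shifts output from the cheapest route (SMR) towards the low-carbon alternatives, which is the technology-competition mechanism the abstract describes at national scale.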

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 78
396 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally. We are emotional and easily influenced by others, as well as by our context. The study of human behaviour has become a supreme endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central for researchers and companies, as well as for policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail and the environment, the methodological models guiding the evaluation of such research have long since reached their limits. Within this context, digitisation, information and communication technologies (ICT) and wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on the digital developments in applied research projects and applicable to academia, enterprises and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health or media. The Modular Behavioural Analysis Approach (MBAA) is described here and validated for the first time through a concrete use case within the domain of health. The results gathered have shown that disclosing information about health in connection with the use of digital health apps can be a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of different 'behavioural profiles', toward which several typologies of interventions can be addressed, is essential to effectively enable behavioural change. The refined version of the MBAA will focus strongly on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 200
395 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Underground coal gasification (UCG) is an efficient and economic in-situ clean coal technology, which converts unmineable coals into gases of calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ gas can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the liberation of this gas to the atmosphere from thermal power plant industries leads to global warming; hence, the capture and reutilization of CO₂ gas are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash north-east Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India: low ash coals with 9% ash (LAC-1) and 4% ash (LAC-2), and a high ash coal (HAC) with 42% ash. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method is used to estimate the activation energy of coal under different temperature regimes, assuming a volumetric model. The inherent properties of coals play a major role in their reactivity. The results show that the activation energy decreases with a decrease in the inherent percentage of coal ash, owing to the hindrance of the ash layer. A reverse trend was observed with volatile matter: high volatile matter leads to low estimated activation energies. It was observed that the activation energy under a CO₂ atmosphere at 400-600°C is lower than under an inert N₂ atmosphere; in this temperature range, a 15-23% reduction in activation energy is estimated under CO₂. This shows the reactivity of CO₂ gas with the higher hydrocarbons of the coal volatile matter, which might occur through the dry reforming reaction, in which CO₂ reacts with the higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200°C and 400-600°C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000°C, higher activation energies are estimated than in the range of 400-600°C. Above 800°C, char gasification through the Boudouard reaction progresses under a CO₂ atmosphere, and the activation energy during char gasification above 800°C is 8-20 kJ/mol higher than for volatile matter pyrolysis between 400-600°C. The overall activation energy of the coals in the temperature range of 30-1000°C is higher in the N₂ atmosphere than in CO₂. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas using high ash Indian coals.
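
The Coats and Redfern estimation described above reduces to a linear fit: for a first-order (volumetric) model, ln[-ln(1-α)/T²] plotted against 1/T has slope -Ea/R. A minimal sketch, with the TGA conversions invented for illustration:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_ea(T, alpha):
    """Activation energy (J/mol) by the Coats-Redfern method for a
    first-order (volumetric) model, as assumed in the abstract:
        ln[-ln(1 - alpha) / T^2] = ln(A*R / (beta * Ea)) - Ea / (R * T)
    A linear fit of the left side against 1/T gives slope = -Ea/R.

    T     : temperatures (K) within one regime, e.g. 673-873 K (400-600 C)
    alpha : conversion fractions at those temperatures (from TGA mass loss)
    """
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R

# Hypothetical TGA conversions in the 400-600 C regime (invented values,
# not data from the paper).
T = np.array([673.0, 723.0, 773.0, 823.0, 873.0])
alpha = np.array([0.10, 0.18, 0.30, 0.45, 0.62])
print("Ea =", coats_redfern_ea(T, alpha) / 1000, "kJ/mol")
```

Repeating the fit per temperature regime and per atmosphere (N₂ vs CO₂) is how regime-wise comparisons like the reported 15-23% reduction are obtained.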

Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification

Procedia PDF Downloads 148
394 Changes in Amino Acids Content in Muscle of European Eel (Anguilla anguilla) in Relation to Body Size

Authors: L. Gómez-Limia, I. Franco, T. Blanco, S. Martínez

Abstract:

European eels (Anguilla anguilla) belong to the order Anguilliformes and the family Anguillidae. They are generally classified as warm-water fish. Eels have great commercial value in Europe and Asian countries. Eels can reach high weights, although their commercial size is relatively low in some countries. The capture of larger eels would facilitate the recovery of the species, as well as providing a greater number of glass eels or elvers for aquaculture. In recent years, the demand for and price of eels have increased significantly. However, the European eel is considered critically endangered on the International Union for Conservation of Nature (IUCN) Red List. The biochemical composition of fish is an important aspect of quality and affects the nutritional value and consumption quality of fish; in addition, knowing this composition can help predict an individual's condition for recovery purposes. Fish is known to be an important source of protein rich in essential amino acids. However, there is very little information about changes in the amino acid composition of European eels with increasing size. The aim of this study was to evaluate the effect of two different weight categories on the amino acid content in the muscle tissue of wild European eels. European eels were caught in the River Ulla (Galicia, NW Spain) during winter. The eels were slaughtered by immersion in ice water, then purchased and transferred to the laboratory, where they were subdivided into two groups according to weight. The samples were kept frozen (-20 °C) until analysis. Frozen eels were defrosted, and the white muscle between the head and the anal opening was extracted in order to determine the amino acid composition. Thirty eels were used for each group. Liquid chromatography was used for the separation and quantification of amino acids. The results show that the eels are rich in glutamic acid, leucine, lysine, threonine, valine, isoleucine and phenylalanine. The analysis showed significant differences (p < 0.05) between eels of different sizes. Histidine, threonine, lysine, hydroxyproline, serine, glycine, arginine, alanine and proline were higher in small eels. European eel muscle presents between 45 and 46% essential amino acids in the total amino acids. European eels are a well-balanced and high-quality protein source with respect to the essential/non-essential (E/NE) amino acid ratio. However, eels of higher weight showed a better ratio of essential to non-essential amino acids.
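
The abstract reports significance at p < 0.05 between the two size groups (n = 30 each). The exact statistical test used in the study is not specified, so a two-sample t-test on invented lysine values stands in here as a hedged sketch of such a comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical lysine contents (g per 100 g protein) for the two weight
# groups, n = 30 each as in the study; all values invented for illustration.
small_eels = rng.normal(loc=8.9, scale=0.5, size=30)
large_eels = rng.normal(loc=8.4, scale=0.5, size=30)

# Two-sample comparison at the alpha = 0.05 level used in the abstract.
t, p = stats.ttest_ind(small_eels, large_eels)
print(f"t = {t:.2f}, p = {p:.4f}, significant: {p < 0.05}")
```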

Keywords: European eels, amino acids, HPLC, body size

Procedia PDF Downloads 83
393 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structures, bearings, and motors called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as location and velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and the alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low: the farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles; however, few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship and, especially, with overcoming the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present a problem definition showing the above-mentioned limitation. Finally, the effectiveness of the approach is demonstrated through computer simulations.
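
The geometric core of the problem can be sketched as follows: a target position follows from the gimbal's azimuth and elevation angles plus the LRF slant range. This is only the measurement equation that the paper's UKF would wrap around; the frame conventions and numbers below are assumptions, not details taken from the paper.

```python
import numpy as np

def localize_target(platform_enu, mast_height, azimuth_deg, elevation_deg, slant_range):
    """Geometric target position from gimbal angles and an LRF range.

    Hedged sketch of the measurement equation only. Assumed conventions:
    azimuth measured from north (clockwise), elevation positive above the
    horizon, so a sea-surface target seen from a mast has a small negative
    elevation -- the near-grazing 'obliqueness' the paper discusses.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    los = np.array([np.sin(az) * np.cos(el),   # east component
                    np.cos(az) * np.cos(el),   # north component
                    np.sin(el)])               # up component
    sensor = np.asarray(platform_enu, float) + np.array([0.0, 0.0, mast_height])
    return sensor + slant_range * los

# Long range plus a low mast gives a near-grazing elevation angle, so small
# angle errors map into large horizontal position errors (invented numbers).
print(localize_target([0, 0, 0], 20.0, azimuth_deg=45.0,
                      elevation_deg=-0.11, slant_range=10_000.0))
```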

Keywords: target localization, ship-borne electro-optical stabilized platform, unscented Kalman filter

Procedia PDF Downloads 494
392 Analysis of the Operating Load of Gas Bearings in the Gas Generator of the Turbine Engine during a Deceleration to Dash Maneuver

Authors: Zbigniew Czyz, Pawel Magryta, Mateusz Paszko

Abstract:

The paper discusses the loads acting on the drive unit of an unmanned helicopter during a deceleration to dash maneuver. Special attention is given to the loads on the bearings in the gas generator of the turbine engine with which the helicopter will be equipped. The analysis was based on the speed changes as a function of time for a manned flight of the PZL W-3 Falcon helicopter. The speed profile during flight was approximated by the least squares method and then differentiated to determine the acceleration. This enabled us to specify the forces acting on the bearings of the gas generator under static and dynamic conditions. The deceleration to dash maneuver starts in steady flight at a speed of 222 km/h with horizontal braking; when the speed reaches 92 km/h, the inclination of the helicopter is changed dynamically to maximum acceleration and near-maximum power, which is held until the initial speed is regained. This type of maneuver is used because shots are ineffective at significant cruising speeds: it is therefore important to reduce speed to the optimum as quickly as possible and, after taking the shot, to return to the initial (cruising) speed. In deceleration to dash maneuvers, we have to deal with the force of gravity of the rotor assembly, gas aerodynamic forces, and the forces caused by axial acceleration during the maneuver. While we can assume that the working components of the gas generator are designed so that the axial gas forces they create balance the aerodynamic effects, the remaining forces act with a value that results from the motion profile of the aircraft. Based on the analysis, we can compile the results. For this maneuver, the force of gravity (referring to the static calculations) equals 5.638 N for bearing A and 1.631 N for bearing B; as the overload coefficient k in this direction is 1, this force results solely from the weight of the rotor assembly. The acceleration in the longitudinal direction reached a_max = 4.36 m/s², giving an overload coefficient k of 0.44. Multiplying this overload coefficient by the weight of all gas generator components acting on the axial bearing, the force caused by axial acceleration during the deceleration to dash maneuver equals only 3.15 N. The results of the calculations are compared with other maneuvers, such as acceleration and deceleration and jump up and jump down maneuvers. This work has been financed by the Polish Ministry of Science and Higher Education.
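
The processing chain described (least-squares approximation of the speed profile, differentiation to obtain acceleration, conversion to an axial bearing load via the overload coefficient k = a/g) can be sketched as follows; the speed trace is a synthetic stand-in, not the flight data used in the paper.

```python
import numpy as np

G = 9.81  # m/s^2

def axial_bearing_load(t, v, bearing_weight_n):
    """Least-squares fit of the speed profile, differentiated to get the
    peak longitudinal acceleration, then converted to an axial bearing
    load via the overload coefficient k = a/g -- the chain of steps in
    the abstract.

    t, v             : time (s) and speed (m/s) samples of the maneuver
    bearing_weight_n : weight of the components acting on the axial bearing (N)
    """
    coeffs = np.polyfit(t, v, 3)               # least-squares polynomial fit
    accel = np.polyval(np.polyder(coeffs), t)  # dv/dt along the profile
    k = np.max(np.abs(accel)) / G              # overload coefficient
    return k, k * bearing_weight_n             # axial force from acceleration

# Synthetic deceleration-to-dash trace: 222 km/h down to 92 km/h and back,
# roughly matching the maneuver described; the 7.16 N weight is implied by
# the abstract's k = 0.44 and 3.15 N values.
t = np.linspace(0.0, 20.0, 21)
v = 25.6 + 36.1 * ((t - 10.0) / 10.0) ** 2  # m/s, invented profile
k, force = axial_bearing_load(t, v, bearing_weight_n=7.16)
print(f"k = {k:.2f}, axial force = {force:.2f} N")
```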

Keywords: gas bearings, helicopters, helicopter maneuvers, turbine engines

Procedia PDF Downloads 313
391 Volume Estimation of Trees: An Exploratory Study on Pterocarpus erinaceus Logging Operations within Forest Transition and Savannah Ecological Zones of Ghana

Authors: Albert Kwabena Osei Konadu

Abstract:

Pterocarpus erinaceus, also known as rosewood, is a tropical wood endemic to the forest-savannah transition zones in the middle and northern portions of Ghana. Its economic viability has made it increasingly popular and in high demand, leading to widespread conservation concerns. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to deficiencies in the monitoring of logging operations and the quantification of harvested tree volumes. The tree information form (TIF), a volume estimation and tracking regime, has proven to be an effective, sustainable management tool for regulating timber resource extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of harvested rosewood within the forest-savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets, and conveying vessels. These TIFs were field-tested to deduce the most viable option for tracking and estimating harvested volumes of rosewood, using Smalian's and the cubic volume estimation formulae. Overall, four districts were covered, with the individual billet, stacked billet and conveying vessel scenarios registering mean volumes of 25.83 m³, 45.08 m³ and 32.6 m³, respectively. These volumes were validated by benchmarking against the assigned volumes of the Forestry Commission of Ghana and the known standard volumes of conveying vessels. The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. The research revealed that the conveying vessel route is the most viable volume estimation and tracking regime for the sustainable management of Pterocarpus erinaceus, as it provided a more practical volume estimate and data extraction protocol.
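
Smalian's formula averages the two end cross-sections of a billet: V = L(A₁ + A₂)/2 with A = πd²/4. A minimal sketch follows; the single-mid-diameter 'cubic' variant shown is an assumption (a Huber-style estimate), since the abstract does not spell out the exact form of its cubic volume formula.

```python
import math

def smalian_volume(d_top_m, d_butt_m, length_m):
    """Smalian's formula: billet volume as the mean of the two end
    cross-sectional areas times the length, V = L * (A_top + A_butt) / 2."""
    a_top = math.pi * d_top_m**2 / 4.0
    a_butt = math.pi * d_butt_m**2 / 4.0
    return length_m * (a_top + a_butt) / 2.0

def mid_diameter_volume(d_mid_m, length_m):
    """Single mid-diameter (Huber-style) estimate, V = A_mid * L; shown as
    an assumed stand-in for the abstract's 'cubic volume formula'."""
    return math.pi * d_mid_m**2 / 4.0 * length_m

# Illustrative billet: 25 cm and 30 cm end diameters, 2.5 m long.
print(f"Smalian volume: {smalian_volume(0.25, 0.30, 2.5):.4f} m^3")
# Summing such per-billet volumes over a stack or a conveying vessel yields
# the mean volumes reported per scenario.
```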

Keywords: convention on international trade in endangered species, cubic volume formula, forest transition savannah zones, Pterocarpus erinaceus, Smalian's volume formula, tree information form

Procedia PDF Downloads 64
390 Climate Change Adaptation Strategy Recommended for the Conservation of Biodiversity in Western Ghats, India

Authors: Mukesh Lal Das, Muthukumar Muthuchamy

Abstract:

A climate change adaptation strategy (AS) is a scientific approach to dealing with the impacts of climate change (CC). Efforts are being made to contain global greenhouse gas emissions within threshold limits, thereby limiting the rise of global temperature to an optimal level. Global climate change is a spontaneous process; therefore, reversing the damage will take decades. The climate change adaptation strategies recommended by various stakeholders could be a key to resilience for biodiversity. The Indian Government has constituted a panel to synthesize climate change action reports at the federal and state levels. This review scoured the published literature on the Western Ghats hotspot and highlights the adaptation strategies recommended by diverse scientific actors to conserve biodiversity. It also reviews the grey literature adopted by state and federal governments and its effectiveness in mitigating the impacts on biodiversity. We have narrowed the scope of interest to the state action reports of the six Indian states (Gujarat, Maharashtra, Goa, Karnataka, Kerala and Tamil Nadu) that host the Western Ghats global biodiversity hotspot. The Western Ghats (WGs) act as the water tower of peninsular India, and their extensive watershed caters to the water demand of industry, agriculture and the urban community; conservation of the WGs is thus key to the prosperity of peninsular India. The global scientific community has suggested more than 600 climate change adaptation strategies for policymakers, stakeholders and other state actors to act on proactively. Preliminary analysis of the federal and state action plans on climate change indicates that the actions in motion fall short of the recommended scientific adaptation strategies. Tamil Nadu and Kerala have instituted nine effective adaptation strategies out of the 40+ recommended for Western Ghats conservation, while the other four states' adaptation strategies are deficient, confusing and vague. The Western Ghats' resilience capacity may soon reach, or may already have reached, its threshold, and the frequency of severe droughts and flash floods may surge manifold in the decades to come. The lack of a clear roadmap for climate change adaptation strategies in the federal and state actions stirred us to identify this gap and address it by offering a holistic approach to WGs biodiversity conservation.

Keywords: adaptation strategy, biodiversity conservation, climate change, resilience, Western Ghats

Procedia PDF Downloads 85
389 "IS Cybernetics": An Idea to Base the International System Theory upon the General System Theory and Cybernetics

Authors: Petra Suchovska

Abstract:

The spirit of post-modernity remains chaotic and obscure. Geopolitical rivalries rage at ever more extreme levels, and the ability of the intellectual community to explain the entropy of global affairs has been diminishing. The Western-led idea of globalisation imposed upon the world no longer seems to promise a bright future for human progress, and its architects are losing much of their global control as strong non-Western cultural entities develop new forms of post-modern establishments. The overall growing cultural misunderstanding and mistrust are expressions of political impotence in dealing with the inner contradictions of the contemporary phenomena (capitalism, economic globalisation) that embrace global society. The drivers and effects of global restructuring must be understood in the context of systems and principles reflecting the true complexity of society. The purpose of this paper is to set out some ideas about how cybernetics can contribute to understanding the structure of the international system and to analysing possible world futures. 'IS cybernetics' would apply systems thinking and cybernetic principles in IR in order to analyse and handle the complexity of social phenomena from a global perspective. 'IS cybernetics' would be, for now, a subfield of IR concerned with applying theories and methodologies from cybernetics and the systems sciences, offering concepts and tools for addressing problems holistically. It would bring order to the complex relations between the disciplines that IR touches upon. One of its tasks would be to map, measure, tackle and find the principles of the dynamics and structure of the social forces that influence human behaviour and consequently cause political, technological and economic structural reordering, forming and reforming the international system. The task of 'IS cyberneticists' would be to understand the control mechanisms that govern the operation of international society (and its sub-systems in their interconnection) and only then to suggest better ways to operate these mechanisms at such sublevels as the cultural, political, technological and religious. 'IS cybernetics' would also strive to capture the mechanisms of social-structural change over time, which would open space for syntheses between IR and historical sociology. With the cybernetic distinction between first-order studies of observed systems and second-order studies of observing systems, IS cybernetics would also provide a unifying epistemological, methodological and conceptual framework for multilateralism and multiple modernities theory.

Keywords: cybernetics, historical sociology, international system, systems theory

Procedia PDF Downloads 206
388 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by a monolingual ideology, with the common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings contribute to our understanding of code switching practices in language classrooms from the learners' perspective. Data were collected through focus group interviews with student participants working towards a Bachelor's degree in English within the English for Business Communication stream. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, the group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others, as they do in real life. The process of interaction provides the opportunity to obtain meanings and answers to a problem that are "socially constructed rather than individually created", leading to the capture of real-life data. This distinct feature of group interaction makes the technique a powerful means of obtaining deeper and richer data than individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of the practice, indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source of support for language output. The practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge, and was also believed to scaffold the students' language production in different contexts. However, the students indicated a preference for teacher code switching to be constrained, as extensive use was believed to negatively impact their L2 learning and trigger cognitive reliance on the L1. The students also perceived that when the L1 was used extensively, their ability to develop as autonomous learners was negatively impacted. This study found that teacher code switching was supported by learners in certain contexts, suggesting that the widespread assumption in favour of a monolingual teaching approach needs to be reconsidered.

Keywords: codeswitching, L1 use, L2 teaching, learners’ perception

Procedia PDF Downloads 290
387 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals serve eyesight (so-called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight ("non-image-forming" signals). This article centers on the neurons of the optic nerve that connect to parts of the limbic system. Eye care providers currently consider the parvocellular and magnocellular processing pathways without realizing that these are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there are also a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were only discovered some 20 years ago, and current prescribing does not yet account for the campana retinal interneurons, discovered only 2 years ago. As eye care providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 57
386 Numerical Investigation on the Influence of Incoming Flow Conditions on the Rotating Stall in Centrifugal Pump

Authors: Wanru Huang, Fujun Wang, Chaoyue Wang, Yuan Tang, Zhifeng Yao, Ruofu Xiao, Xin Chen

Abstract:

Rotating stall in a centrifugal pump is an unsteady flow phenomenon that causes instability and high hydraulic losses. It typically occurs at low flow rates, due to large flow separation in the impeller blade passages. In order to reveal the influence of incoming flow conditions on rotating stall in a centrifugal pump, a numerical method for investigating rotating stall was established, based on a modified SST k-ω turbulence model and a fine mesh. The flow velocity in the impeller calculated by this method was in good agreement with PIV results. The effects of flow rate and sealing-ring leakage on the stall characteristics of the pump were studied using the proposed numerical approach, and the flow structures in the impeller under typical flow rates and typical sealing-ring leakages were analyzed. It is found that the stall vortex frequency and the circumferential propagation velocity increase as the flow rate decreases: as the flow rate decreases from 0.40Qd to 0.30Qd, the stall vortex frequency increases from 1.50 Hz to 2.34 Hz, and the circumferential propagation velocity of the stall vortex increases from 3.14 rad/s to 4.90 rad/s. Under almost all flow rate conditions where rotating stall is present, there is low-frequency pressure pulsation between 0 and 5 Hz, and the corresponding pressure pulsation amplitude increases as the flow rate decreases; taking the measuring point at the leading edge of the blade pressure surface as an example, as the flow rate decreases from 0.40Qd to 0.30Qd, the pressure fluctuation amplitude increases by 86.9%. With increasing leakage, the flow structure in the impeller becomes more complex, and the 8-shaped stall vortex is no longer stable: under large leakage, new vortex nuclei are constantly generated and fused with the original vortex nuclei on the basis of the 8-shaped stall vortex. The upstream and downstream structures of the 8-shaped stall vortex migrate to different degrees within the flow passage, with the downstream vortex migrating more noticeably. The results show that the proposed numerical approach can capture the detailed vortex characteristics, and that the incoming flow conditions have significant effects on the stall vortex in centrifugal pumps.
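
The low-frequency pulsation analysis mentioned above amounts to inspecting the 0-5 Hz band of a monitoring-point pressure spectrum. A hedged sketch on a synthetic signal carrying the 2.34 Hz stall component reported at 0.30Qd:

```python
import numpy as np

# Synthetic monitoring-point pressure signal: a 2.34 Hz stall component
# (the frequency reported at 0.30Qd) plus broadband noise. The sampling
# rate and amplitudes are invented, not simulation data from the paper.
fs = 1000.0                               # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
p = np.sin(2 * np.pi * 2.34 * t) + 0.2 * rng.standard_normal(t.size)

# One-sided amplitude spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(p)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

low = freqs <= 5.0                        # the 0-5 Hz band of interest
peak = freqs[low][np.argmax(spectrum[low])]
print(f"dominant low-frequency peak: {peak:.2f} Hz")
```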

Keywords: centrifugal pump, rotating stall, numerical simulation, flow condition, vortex frequency

Procedia PDF Downloads 118
385 Interaction between Trapezoidal Hill and Subsurface Cavity under SH Wave Incidence

Authors: Yuanrui Xu, Zailin Yang, Yunqiu Song, Guanxixi Jiang

Abstract:

The influence of local topography on ground motion during earthquakes is an important subject in seismology. In mountainous areas with complex terrain, tunnel construction is often the most effective transportation scheme. In such projects, the local terrain can be simplified into hills of different shapes, and the underground tunnel structure can be regarded as a subsurface cavity. The presence of the subsurface cavity affects the strength of the rock mass and changes its deformation and failure characteristics. Moreover, the scattering of elastic waves by underground structures usually interacts with the local terrain, which significantly influences the surface displacement of the terrain. It is therefore of great practical significance in earthquake engineering and seismology to study the surface displacement of local terrains containing underground tunnels. In this work, the domain is divided into three regions by the method of region matching. Using fractional-order Bessel and Hankel functions, the complex function method, and the wave function expansion method, the wavefield expressions for SH waves are introduced. With the help of the constitutive relation between the displacement and the stress components, the hoop and radial stresses are obtained. Then, using the continuity conditions at the region boundaries, the undetermined coefficients in the wave fields are solved for by Fourier series expansion and truncation to a finite number of terms. Finally, the validity of the method is verified, and the surface displacement amplitude is calculated and discussed in the numerical results. The results show that parameters such as the radius and burial depth of the tunnel, the wave number, and the incident angle of the SH wave have a significant influence on the surface displacement amplitude. For the underground tunnel, as the burial depth increases, the surface displacement amplitude response first increases and then decreases, whereas increasing the radius produces the opposite trend. Increasing the SH wave number enlarges the surface displacement amplitude, and changing the incident angle noticeably affects the amplitude fluctuation.
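
The full hill-plus-cavity solution needs the region matching described above. As a much-reduced illustration of the same wave-function-expansion ingredients (a Bessel/Hankel series fixed by a boundary condition), the sketch below solves the classical full-space scattering of a plane SH wave by a traction-free circular cavity, not the paper's trapezoidal geometry:

```python
import numpy as np
from scipy.special import jv, jvp, h1vp, hankel1

def sh_cavity_displacement(ka, r_over_a, theta, n_terms=30):
    """Total anti-plane displacement for a plane SH wave scattered by a
    traction-free circular cavity of radius a in a full space: a classical
    reduced case of the wave function expansion method.

    The incident wave u_inc = exp(i k x) is expanded in Bessel functions;
    the scattered field uses Hankel functions of the first kind, with the
    coefficients fixed by the stress-free condition du/dr = 0 at r = a.
    """
    kr = ka * r_over_a
    u = np.zeros_like(np.asarray(theta, dtype=complex))
    for n in range(n_terms):
        eps = 1.0 if n == 0 else 2.0  # Neumann factor for the cosine series
        a_n = -(1j**n) * eps * jvp(n, ka) / h1vp(n, ka)
        u += (1j**n) * eps * jv(n, kr) * np.cos(n * theta) \
             + a_n * hankel1(n, kr) * np.cos(n * theta)
    return u

# Displacement amplitude |u| around the cavity wall for ka = 1.
theta = np.linspace(0.0, np.pi, 7)
print(np.abs(sh_cavity_displacement(1.0, 1.0, theta)))
```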

Keywords: method of region matching, scattering of SH wave, subsurface cavity, trapezoidal hill

Procedia PDF Downloads 112
384 Time and Energy Saving Kitchen Layout

Authors: Poonam Magu, Kumud Khanna, Premavathy Seetharaman

Abstract:

Time and energy are the two important resources of any worker performing any type of work at any workplace. They are important inputs and need to be utilised in the best possible manner. The kitchen is an important workplace where the homemaker performs many essential activities, and its layout should be designed so that optimum use of her resources can be achieved. Ideally, the shape of the kitchen, as determined by the physical space enclosed by the four walls, can be square, rectangular or irregular, but it is the shape of the arrangement of the counter that one normally refers to when talking of the layout of the kitchen: the arrangement can be along a single wall, along two opposite walls, L-shaped, U-shaped, or an island. A study was conducted in 50 kitchens belonging to middle-income-group families. These were DDA-built kitchens located in North, South, East and West Delhi. The study was conducted in three phases. In the first phase, 510 non-working homemakers were interviewed, and data related to their personal characteristics were collected, along with additional information about the kitchens (size, shape, etc.). The homemakers were also questioned about various aspects of meal preparation: the people performing the task, the number of items cooked, the areas used for meal preparation, etc. In the second phase, a suitable technique, called the path process chart, was designed for conducting time and motion studies in the kitchen while a meal was being prepared. The final phase was carried out in 50 kitchens, selected on the criterion that all items for a meal were cooked at the same time. All the meals were cooked by the homemakers in their own kitchens, and meal preparation was studied using the path process chart technique. The data collected were analysed and conclusions drawn. It was found that of all the shapes, the L-shaped arrangement was the one in which, on average, a homemaker spent the least time on meal preparation and also travelled the least distance: the average distance travelled in an L-shaped layout was 131.1 m, compared to 181.2 m in a U-shaped layout, and the average time spent on meal preparation was 48 minutes in an L-shaped layout, compared to 53 minutes in a U-shaped one. The L-shaped layout was thus more time- and energy-saving than the U-shaped layout.

Keywords: kitchen layout, meal preparation, path process chart technique, workplace

Procedia PDF Downloads 184
383 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented step by step, so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? And if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different points of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a mixture of experts, we can simultaneously query the experts about the user's most probable future actions. We show that for those participants who learn the task, it becomes possible to predict their next decision above chance approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
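
The prediction step described (filtering over hidden stereotyped strategies, then asking for the most probable next action) is the standard HMM forward recursion. A minimal sketch with invented transition and emission matrices, not the authors' fitted model:

```python
import numpy as np

def predict_next_action(A, B, pi, observations):
    """Forward-filter a discrete HMM and predict the next observation.

    Hidden states stand for stereotyped search strategies, observations
    for user actions. All matrices below are invented for illustration.

    A  : (S, S) transition matrix between strategies
    B  : (S, O) emission matrix, P(action | strategy)
    pi : (S,)   initial strategy distribution
    """
    belief = pi.copy()
    for o in observations:                       # forward algorithm (filtering)
        belief = belief * B[:, o]                # condition on the observed action
        belief = A.T @ (belief / belief.sum())   # propagate one step ahead
    return belief @ B                            # P(next action) = sum_s P(s) P(o|s)

# Two strategies (e.g., exhaustive vs. targeted search), three actions.
A = np.array([[0.8, 0.2],
              [0.1, 0.9]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.2, 0.7]])
pi = np.array([0.7, 0.3])
print(predict_next_action(A, B, pi, observations=[0, 1, 2, 2]))
```

Picking the argmax of the returned distribution gives the above-chance next-decision prediction the abstract refers to.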

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 228
382 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement of not overshooting 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments: as these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts; the same holds for individuals with regard to the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are under a duty to reduce their emissions only if they exceed this baseline level. This is different for whole societies: societies demanding more emissions than the individual fair share in order for their members to appear in public without shame are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs on the scale needed to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions; this is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

Procedia PDF Downloads 95
381 An Approach for the Capture of Carbon Dioxide via Polymerized Ionic Liquids

Authors: Ghassan Mohammad Alalawi, Abobakr Khidir Ziyada, Abdulmajeed Khan

Abstract:

Ionic liquids (ILs) have lately been suggested as a potential alternative, next-generation CO₂-selective separation medium. It is easier to "tune" the solubility and selectivity of CO₂ in ILs than in organic solvents, via modification of the cation and/or anion structures. Compared to room-temperature ionic liquids, polymerized ionic liquids exhibit increased CO₂ sorption capacities and accelerated sorption/desorption rates. This research investigates the correlation between the CO₂ sorption rate and capacity of polymerized ionic liquids (pILs) and their chemical structure. One of the hypotheses we offer to explain the attraction between CO₂ and pILs is the dependence of sorption on the ion conductivity of the pILs' cations and anions. This hypothesis was supported by Monte Carlo molecular dynamics simulation results, which demonstrated that CO₂ molecules are localized around both cations and anions and that their sorption depends on the cations' and anions' ion conductivities. Polymerized ionic liquids were synthesized to investigate the impact of substituent alkyl chain length, cation, and anion on the CO₂ sorption rate and capacity. Synthesis of the pILs under study involves three stages: first, trialkyl amine and vinyl benzyl chloride are directly quaternized to obtain the required cation; next, anion exchange is performed; and finally, the obtained IL is polymerized to form the desired product (pILs). The structures of the synthesized pILs were confirmed using elemental analysis and NMR. The synthesized pILs were characterized by examining their structural topology, chloride content, density, and thermal stability using SEM, ion chromatography (a Metrohm Model 761 Compact IC apparatus), an ultrapycnometer, and TGA. As determined by CO₂ sorption measurements with a magnetic suspension balance (MSB) apparatus, the sorption capacity of pILs depends on the cation and anion ion conductivities; the anion's size also influences the CO₂ sorption rate and capacity. It was discovered that adding water to pILs caused a dramatic, systematic swelling of the pILs, resulting in a significant increase in their capacity to absorb CO₂ under identical conditions, contingent on the type of gas, gas flow, applied gas pressure, and water content of the pILs. Along with its capacity to increase surface area through expansion, water also possesses very high ion conductivity for cations and anions, enhancing the ability of the pILs to absorb CO₂.

Keywords: polymerized ionic liquids, carbon dioxide, swelling, characterization

Procedia PDF Downloads 33
380 The Impact of Professional Development in the Area of Technology Enhanced Learning on Higher Education Teaching Practices Across Atlantic Technological University – Research Methodology and Preliminary Findings

Authors: Annette Cosgrove

Abstract:

The objective of this research study is to examine the impact of professional development in technology enhanced learning (TEL) and the digitisation of learning on teaching communities across multiple higher education sites in the ATU (Atlantic Technological University) over 2020-2025, including the proposal of an evidence-based digital teaching model for use in a future pandemic. The research strategy undertaken for this PhD study is a multi-site study using mixed methods, with both qualitative and quantitative methods used to collect data. A pilot study was carried out initially; feedback was collected, and the research instrument was edited to reflect this feedback before being administered. The purpose of the staff questionnaire is to evaluate the impact of professional development in the area of TEL and to capture practitioners' views on the perceived impact on their teaching practice in the higher education sector across ATU (five higher education locations in the West of Ireland). The phenomenon being explored is the impact of professional development in the area of technology enhanced learning on teaching practice in a higher education institution. The research methodology chosen for this study is action-based research; the researcher has chosen this approach as it is a prime strategy for developing educational theory and enhancing educational practice. This study includes quantitative and qualitative methods to elicit data that will quantify the impact that continuous professional development in digital teaching practice and technologies has on practitioners' teaching in higher education. The research instruments/data collection tools for this study include a lecturer survey with a targeted TEL practice group (pre- and post-Covid experience) and semi-structured interviews with lecturers. The research is currently being conducted across the ATU multi-site campus, targeting higher education lecturers who have completed formal CPD in the area of digital teaching. The research questionnaire has been deployed, with 75 respondents to date across the ATU; the primary questionnaire and semi-structured interviews are ongoing. This paper will present initial findings, reflections and data from this ongoing research study.

Keywords: TEL, DTL, digital teaching, digital assessment

Procedia PDF Downloads 43
379 Do the Health Benefits of Oil-Led Economic Development Outweigh the Potential Health Harms from Environmental Pollution in Nigeria?

Authors: Marian Emmanuel Okon

Abstract:

Introduction: The Niger Delta region of Nigeria has vast reserves of oil and gas, which have positioned the nation globally as the sixth largest exporter of crude oil. Production rose rapidly following the discovery of oil. In most oil producing nations of the world, the wealth generated from oil production and export has propelled economic advancement, enabling the development of industries and other relevant infrastructure. It can therefore be assumed that a major oil resource such as Nigeria's has the potential to improve the health of the population via job creation and derived revenues. However, the health benefits of this economic development might be offset by the environmental consequences of oil exploitation and production. Objective: This research aims to evaluate the balance between the health benefits of oil-led economic development and the harmful environmental consequences of crude oil exploitation in Nigeria. Study Design: A pathway has been designed to guide the data search and this study. The model created will assess the relationship between oil-led economic development and population health development via job creation, improvement of education, development of infrastructure and other forms of development, as well as via the harmful environmental consequences of oil activities. Data/Emerging Findings: Diverse potentially suitable datasets at different geographical scales have been identified, obtained or applied for, of which the World Bank dataset has been the most thoroughly explored. This large dataset contains information that would enable a longitudinal assessment of both the health benefits and the harms of oil exploitation in Nigeria, as well as identification of the disparities that exist between communities, states and regions. However, these data do not extend far enough back in time to capture the start of crude oil production, so the maximum economic benefits and health harms could be missed. To deal with this shortcoming, the potential for a comparative study with countries such as the United Kingdom, Morocco and Cote D'ivoire has also been considered, in order to evaluate the differences between these countries and identify areas of improvement in Nigeria's environmental and health policies. Notwithstanding, these data have shown differences in each country's economic, environmental and health state over time, with corresponding summary statistics. Conclusion: In theory, the beneficial effects of oil exploitation on the health of the population may be substantial, as large swaths of the 'wider determinants' of population health are influenced by the wealth of a nation. However, if uncontrolled, the consequences of environmental pollution and degradation may outweigh these benefits. There is thus a need to address this in order to improve environmental and population health in Nigeria.

Keywords: environmental pollution, health benefits, oil-led economic development, petroleum exploitation

Procedia PDF Downloads 305
378 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, all to render SAR data in an image domain comparable to what a human would view and so ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values, summed over the entire collection. This yields a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
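
The per-point reflectivity sum described above can be illustrated with a minimal sketch; this is not the authors' implementation, but a CPU/NumPy version of the backprojection loop, assuming range-compressed phase history data, known antenna positions per pulse, and a simple nearest-bin range lookup (all variable names and the phase model are illustrative):

```python
import numpy as np

def backproject(range_profiles, antenna_positions, voxels, f0, range_bins, c=3e8):
    """Sum the range-compressed contribution of every pulse at every 3D voxel.

    range_profiles    : (n_pulses, n_bins) complex range-compressed data
    antenna_positions : (n_pulses, 3) platform position per pulse
    voxels            : (n_voxels, 3) 3D reference points (e.g. a DEM point cloud)
    f0                : centre frequency in Hz
    range_bins        : (n_bins,) range of each bin in metres, sorted ascending
    """
    image = np.zeros(voxels.shape[0], dtype=complex)
    for p in range(range_profiles.shape[0]):
        # Distance from this pulse's antenna position to every voxel.
        r = np.linalg.norm(voxels - antenna_positions[p], axis=1)
        # Nearest-bin lookup into the range-compressed profile.
        idx = np.clip(np.searchsorted(range_bins, r), 0, len(range_bins) - 1)
        # Phase correction for the two-way propagation delay, then accumulate.
        image += range_profiles[p, idx] * np.exp(1j * 4 * np.pi * f0 * r / c)
    return np.abs(image)  # per-voxel reflectivity magnitude
```

Because each voxel's sum is independent of all other voxels, the same computation maps directly onto one GPU thread per voxel, which is the parallelism the abstract exploits.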

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 48
377 Mechanism of Action of New Sustainable Flame Retardant Additives in Polyamide 6,6

Authors: I. Belyamani, M. K. Hassan, J. U. Otaigbe, W. R. Fielding, K. A. Mauritz, J. S. Wiggins, W. L. Jarrett

Abstract:

We have investigated the flame-retardant efficiency of new phosphate glass (P-glass) compositions having different glass transition temperatures (Tg) on the processing conditions of polyamide 6,6 (PA6,6) and the final hybrid flame retardancy (FR). We have shown that the low-Tg P-glass composition (ILT 1) is a promising flame retardant for PA6,6 at concentrations of up to 15 wt. %, compared to the intermediate (IIT 3) and high (IHT 1) Tg P-glasses. Cone calorimetry data showed that ILT 1 decreased both the peak heat release rate and the total heat released from the PA6,6/ILT 1 hybrids, resulting in the efficient formation of a glassy char layer. These intriguing findings prompted us to address several questions concerning the mechanism of action of the different P-glasses studied. The general mechanism of action of phosphorus-based FR additives occurs during the combustion stage, by enhancing the morphology of the char and the thermal shielding effect. However, the present work shows that P-glass-based FR additives act during melt processing of the PA6,6/P-glass hybrids. Dynamic mechanical analysis (DMA) revealed that the Tg of PA6,6/ILT 1 was significantly shifted to a lower temperature (~65 °C) and that another transition appeared at high temperature (~166 °C), indicating a strong interaction between PA6,6 and ILT 1. This was supported by a drop in the melting point and crystallinity of the PA6,6/ILT 1 hybrid material, as detected by differential scanning calorimetry (DSC). The dielectric spectroscopic investigation of the networks' molecular-level structural variations (i.e., hybrid chain motion, Tg and sub-Tg relaxations) agreed very well with the DMA and DSC findings: the three different P-glass compositions did not show any effect on the PA6,6 sub-Tg relaxations (related to the NH2 and OH chain-end group motions). Nevertheless, contrary to the IIT 3 and IHT 1 based hybrids, the PA6,6/ILT 1 hybrid material showed evidence of a splitting of the PA6,6 Tg relaxation into two peaks. Finally, the CPMAS 31P-NMR data confirmed miscibility between ILT 1 and PA6,6 at the molecular level, as a much larger enhancement in cross-polarization was observed for the PA6,6/15% ILT 1 hybrids. It can be concluded that compounding a low-Tg P-glass (ILT 1) with PA6,6 facilitates hydrolytic chain scission of the PA6,6 macromolecules through a potential chemical interaction between the phosphate and the alpha-carbon of the amide bonds of PA6,6, leading to better flame-retardant properties.

Keywords: broadband dielectric spectroscopy, composites, flame retardant, polyamide, phosphate glass, sustainable

Procedia PDF Downloads 209
376 Multiaxial Fatigue in Thermal Elastohydrodynamic Lubricated Contacts with Asperities and Slip

Authors: Carl-Magnus Everitt, Bo Alfredsson

Abstract:

Contact mechanics and tribology have been combined with fundamental fatigue and fracture mechanics to form the asperity mechanism, which provides an explanation for surface-initiated rolling contact fatigue damage, called pitting or spalling. The cracks causing the pits initiate at a surface point and thereafter grow slowly into the material before a piece of material chips away to form the pit. In the current study, the lubrication aspects of fatigue initiation were simulated by passing a single asperity through a thermal elastohydrodynamically lubricated (TEHL) contact. The physics of the lubricant was described with Reynolds equation, and the lubricant's pressure-viscosity relation was modelled by Roelands equation, formulated to include temperature dependence. A pressure-dependent shear limit was incorporated. To capture the full phenomena of the sliding contact, the temperature field was resolved through the incorporation of the energy flow. The heat was mainly generated by shearing of the lubricant and by dry friction where metal contact occurred; it was then transported, and conducted, away by the solids and the lubricant. The fatigue damage caused by the asperities was evaluated through Findley's fatigue criterion. The results show that asperities of the size of surface roughness found in applications may cause surface-initiated fatigue damage and crack initiation. The simulations also show that the asperities broke through the lubricant film in the inlet, causing metal-to-metal contact with high friction. As the asperities thereafter moved through the contact, the sliding supplied them with lubricant, releasing the metal contact. This release was possible because of the high viscosity the lubricant attained under the high pressure. The metal contact in the inlet caused higher friction, which increased the risk of fatigue damage. Since the metal contact occurred in the inlet, it raised the fatigue risk more for asperities subjected to negative slip than to positive slip. The fatigue evaluations therefore showed that asperities subjected to negative slip yielded higher fatigue stresses than asperities subjected to positive slip of equal magnitude. This is one explanation for why pitting is more common in the dedendum than the addendum of pinion gear teeth. The simulations provide further validation of the asperity mechanism by showing that asperities cause surface-initiated fatigue and crack initiation.
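
As a rough illustration of the fatigue evaluation step, the following sketch applies Findley's criterion to a plane-stress history by scanning candidate plane orientations. The stress-history layout, the material parameter k, and the plane discretization are assumptions made for illustration, not the authors' code:

```python
import numpy as np

def findley_damage(stress_history, k=0.3, n_planes=180):
    """Findley's criterion: max over planes of (tau_a + k * sigma_n_max).

    stress_history : (n_steps, 3) array of (sigma_x, sigma_y, tau_xy) sampled
                     over the load cycle, e.g. as the asperity passes through
                     the contact.
    k              : normal-stress sensitivity (material parameter, assumed).
    """
    worst = 0.0
    for theta in np.linspace(0.0, np.pi, n_planes, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        sx, sy, txy = stress_history[:, 0], stress_history[:, 1], stress_history[:, 2]
        # Normal and shear stress on the plane with normal (c, s), per time step.
        sigma_n = sx * c**2 + sy * s**2 + 2 * txy * c * s
        tau = (sy - sx) * c * s + txy * (c**2 - s**2)
        # Shear amplitude over the cycle plus weighted peak normal stress.
        tau_a = 0.5 * (tau.max() - tau.min())
        worst = max(worst, tau_a + k * sigma_n.max())
    return worst  # compare against the material's Findley limit
```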

Keywords: fatigue, rolling, sliding, thermal elastohydrodynamic

Procedia PDF Downloads 101
375 Construction of Graph Signal Modulations via Graph Fourier Transform and Its Applications

Authors: Xianwei Zheng, Yuan Yan Tang

Abstract:

The classical windowed Fourier transform has been widely used in signal processing, image processing, machine learning and pattern recognition, and the related Gabor transform is powerful enough to capture the texture information of a given dataset. Recently, in the emerging field of graph signal processing, researchers have been developing a theory to handle so-called graph signals. Within this developing theory, the windowed graph Fourier transform has been constructed to establish a time-frequency analysis framework for graph signals. It is defined through translation and modulation operators for graph signals, following calculations similar to those of the classical windowed Fourier transform, with both operators defined using the Laplacian eigenvectors: just as classical translation can be expressed through the Fourier atoms, graph signal translation is defined analogously through the Laplacian eigenvectors, and graph modulation has likewise been established using them. The windowed graph Fourier transform based on these two operators has been applied to obtain time-frequency representations of graph signals. Fundamentally, the existing modulation operator mimics classical modulation by multiplying a graph signal with the entries of a Laplacian eigenvector. However, a single Laplacian eigenvector entry cannot play the same role as a Fourier atom, and this definition ignores the relationship between the translation and modulation operators. In this paper, a new definition of the modulation operator is proposed, and with it another time-frequency framework for graph signals is constructed. The relationship between translation and modulation can be established through the Fourier transform: for any signal, the Fourier transform of its translation is the modulation of its Fourier transform, so the modulation of any signal can be defined as the inverse Fourier transform of the translation of its Fourier transform. Analogously, the graph modulation of any graph signal can be defined as the inverse graph Fourier transform of the translation of its graph Fourier transform. This novel definition of the graph modulation operator establishes the missing relationship between the translation and modulation operations. The new modulation operation and the original translation operation are applied to construct a new framework for graph signal time-frequency analysis, and a windowed graph Fourier frame theory is developed: necessary and sufficient conditions for constructing windowed graph Fourier frames, tight frames and dual frames are presented. The novel time-frequency analysis framework is applied to signals defined on well-known graphs, e.g., the Minnesota road graph and random graphs. Experimental results show that the novel framework captures new features of graph signals.
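
A minimal numerical sketch of the operators (assuming an undirected graph with a real symmetric Laplacian; all function names are illustrative, not the paper's code) may clarify the construction: translation is built from the Laplacian eigenvectors, and the proposed modulation is the inverse graph Fourier transform of the translation of the signal's graph Fourier transform.

```python
import numpy as np

def graph_fourier_basis(L):
    """Eigendecomposition of the graph Laplacian L; the columns of U are the
    graph Fourier atoms, lam the graph frequencies."""
    lam, U = np.linalg.eigh(L)
    return lam, U

def gft(U, f):
    return U.T @ f            # forward graph Fourier transform

def igft(U, f_hat):
    return U @ f_hat          # inverse graph Fourier transform

def translate(U, f, i):
    """Generalized translation to vertex i, defined through the Laplacian
    eigenvectors (the graph analogue of shifting with Fourier atoms)."""
    N = U.shape[0]
    return np.sqrt(N) * (U @ (gft(U, f) * U[i, :]))

def modulate(U, f, k):
    """Modulation in the sense sketched above: the inverse GFT of the
    translation of the signal's GFT, mirroring the classical duality
    between translation and modulation."""
    return igft(U, translate(U, gft(U, f), k))
```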

Keywords: graph signals, windowed graph Fourier transform, windowed graph Fourier frames, vertex frequency analysis

Procedia PDF Downloads 315
374 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

Understanding the effect of station density on the geostatistical prediction of groundwater levels is critical to ensuring accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from the different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions of groundwater levels: the increased number of monitoring stations improves interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
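
The density experiment can be sketched, for illustration only, with the PyKrige implementation of Ordinary Kriging: fit the kriging model on a random subset of wells and score predictions at the held-out wells, repeating over subset fractions. The variogram model, split strategy and variable names are assumptions, not the study's exact workflow.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

def kriging_rmse(x, y, levels, train_frac, seed=0):
    """Fit Ordinary Kriging on a random subset of wells and report the RMSE
    at the held-out wells, so prediction skill can be compared across
    network densities."""
    rng = np.random.default_rng(seed)
    n = len(levels)
    train = rng.choice(n, size=int(train_frac * n), replace=False)
    test = np.setdiff1d(np.arange(n), train)
    ok = OrdinaryKriging(x[train], y[train], levels[train],
                         variogram_model="spherical")
    pred, _ = ok.execute("points", x[test], y[test])  # predictions + variances
    return np.sqrt(np.mean((pred - levels[test]) ** 2))

# e.g. compare densities: [kriging_rmse(x, y, z, f) for f in (0.15, 0.3, 0.6, 0.9)]
```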

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 26
373 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied using one of two mainstream approaches: the multiplier approach or the value chain approach. Consequently, empirical results in the literature have limited comparability, and it is often unclear on what criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by these two most frequent methodological approaches to valuing the economic effects of cultural heritage at the macroeconomic level. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis; the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of its contribution that is hidden in other sectors. Whether the value chain method over- or underestimates the actual economic impact cannot be clearly determined, since the direct effects risk being overestimated and double-counted while not all indirect and induced effects are considered. Accordingly, the two approaches are not substitutes but rather complementary, and a direct comparison of the estimated impacts is not possible and should not be made, given the different scopes. To illustrate the difference, we apply both approaches to the case of Slovenia over the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimate obtained with the multiplier approach is more conservative, while the estimates based on the value chain approach capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value chain approach, the indirect economic effect of the 'narrow' heritage sectors is amplified by the impact of cultural heritage activities on other sectors: every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors, and each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
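
The multiplier logic can be illustrated with a toy Leontief input-output computation; the two-sector technical coefficients below are hypothetical and chosen only to show how an output multiplier (direct plus indirect effects) is derived, not to reproduce the paper's estimates.

```python
import numpy as np

# Toy two-sector input-output example (hypothetical coefficients): sector 0
# is "cultural heritage", sector 1 is the rest of the economy.
# A[i, j] = input from sector i required per euro of output of sector j.
A = np.array([[0.05, 0.01],
              [0.10, 0.30]])

# Leontief inverse: total (direct + indirect) output required per euro of
# final demand. The output multiplier for heritage is its column sum.
L = np.linalg.inv(np.eye(2) - A)
heritage_multiplier = L[:, 0].sum()
print(f"each euro of heritage demand supports {heritage_multiplier:.2f} "
      "euros of output economy-wide")
```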

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 48
372 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, road traffic deaths are rising, reaching 1.35 million in 2016 compared with 1.3 million a decade earlier, and road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that of drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system that captures real-time data about vehicle usage. The data collected can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking and speed, as well as to provide location information. With the creation of the National Telematics Framework, the Australian government has increased its focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour are related to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to install in their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared with their behaviour in the second month to determine whether feedback from the telematics device improves driving behaviour. Participants completed the DBQ, scored on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month and after the second month; this is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with changes in the DBQ using regression models in SAS. Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with a mean of 50.5, a standard deviation of 11.36 and a range of 29 to 76, higher scores indicating worse driving behaviour. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that within the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing comparison with baseline levels, and the trends in the telematics data over the first 30 days will then be compared with the changes observed in the DBQ. Conclusions: A significant relationship is expected between the improvements in the DBQ and the trends of reduced telematics-measured aggressive driving behaviour, supporting the hypothesis.
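
For illustration, the reliability figure reported above can be computed with Cronbach's alpha; the sketch below assumes a respondents-by-items matrix of Likert scores and is not the study's SAS code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert
    scores, as used to check a questionnaire's internal consistency."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```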

Keywords: telematics, driving behaviour, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 83
371 RiboTaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for taxonomic profiling of metagenomics data and estimation of relative taxon abundance. Small-subunit rRNA genes are nowadays a gold standard for phylogenetic resolution of complex microbial communities, although the power of this marker is only fully realized when it is used full-length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers, and then built a pipeline called RiboTaxa to provide a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared with other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study of coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue; RiboTaxa highlighted two species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study of the human gut and extreme longevity (5), RiboTaxa detected an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, as well as three uncultured species belonging to the genus Enorma that could be of interest in the longevity process. RiboTaxa is user-friendly and rapid, allows description of microbiota structure from any environment, and produces results that are easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 92
370 The Usage of the Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have high variance, so it is very important to treat seasonality in seasonal macroeconomic data appropriately. Several methods exist to eliminate the impact of seasonality in time series. One is filtering the data; however, this leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled with seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to model such seasonally non-stationary series with dummies. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly and semiannual). Another issue in unit root tests is lag selection: lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure can lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model, and a Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may yield some gain in size and power over the HEGY test.
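
To make the setting concrete, here is a hedged sketch of the HEGY auxiliary regression for quarterly data, covering only the filtered regressors and the OLS fit; the Bridge penalization itself is only indicated in a comment, and the function name and lag convention are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm  # pip install statsmodels

def hegy_quarterly(y, p=4):
    """HEGY auxiliary regression for a quarterly series y: the seasonal
    difference (1 - L^4)y_t is regressed on filtered levels that isolate
    the zero-, semiannual- and annual-frequency unit roots, plus p lagged
    seasonal differences. t-tests on pi1, pi2 and a joint test on
    (pi3, pi4) then test for the seasonal unit roots."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]     # (1+L+L^2+L^3)y, zero frequency
    y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])  # -(1-L+L^2-L^3)y, semiannual
    y3 = -(y[2:] - y[:-2])                      # -(1-L^2)y, annual frequency
    d4 = y[4:] - y[:-4]                         # (1-L^4)y, dependent variable
    t = np.arange(4 + p, T)                     # usable observations
    X = np.column_stack([y1[t - 4], y2[t - 4], y3[t - 4], y3[t - 3]] +
                        [d4[t - 4 - j] for j in range(1, p + 1)])
    model = sm.OLS(d4[t - 4], sm.add_constant(X)).fit()
    # params[1:5] are pi1..pi4; choosing p is where a Bridge (L_gamma-
    # penalized) estimator would replace the conventional stepwise search.
    return model
```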

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 305