Search results for: real time digital simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22645

18235 Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks

Authors: Huawei Bai, Jianguo Yao, Fellow, IEEE

Abstract:

Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and morbidity in heart failure, stroke, and coronary artery disease. The development of single-spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies, based on public and unbalanced datasets, can barely handle the multiple noise sources in the real world and also lack interpretability. In this paper, we construct a large-scale PPG dataset using measurements collected from PPG wristwatch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and a two-layer attention-based bidirectional long short-term memory (Bi-LSTM) network to learn representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer for the outcome. Visualization of the learned attention weights demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all competitive baseline approaches and, with 98.09% accuracy, achieves state-of-the-art performance.
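
As a minimal illustration of the attention idea described above (not the TSHNN architecture itself), the sketch below pools per-segment features with softmax attention weights, so that a segment given a low score, e.g. a noisy one, contributes little to the pooled representation. All features and scores are made-up toy values.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(features, scores):
    """Weighted average of per-segment features using attention weights.

    Segments with low scores (e.g. noisy ones) receive small weights
    and so contribute little to the pooled representation.
    """
    weights = softmax(scores)
    return sum(w * x for w, x in zip(weights, features))

# Toy example: three scalar features per PPG segment; the middle
# segment is "noisy" and is given a low attention score.
features = [1.0, 9.0, 1.2]
scores = [2.0, -4.0, 2.1]
pooled = attention_pool(features, scores)
```

The noisy middle segment, despite its outlying feature value of 9.0, barely moves the pooled result away from the two clean segments.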

Keywords: PPG wearables, atrial fibrillation, feature fusion, attention mechanism, hybrid network

Procedia PDF Downloads 114
18234 Effect of Feed Supplement Optipartum C+ 200 (Alfa-Amylase and Beta-Glucanase) on In-Line Rumination Parameters

Authors: Ramūnas Antanaitis, Lina Anskienė, Robertas Stoškus

Abstract:

This study was conducted from 2021-05-01 to 2021-08-31 at the Lithuanian University of Health Sciences and one Lithuanian dairy farm with 500 dairy cows (55.911381565736, 21.881321760608195), with an average of 50 calvings per month. Cows (n=20) in the treatment group (TG) were fed the feed supplement Optipartum C+ 200 (enzymes: Alfa-Amylase 57 units; Beta-Glucanase 107 units) from 21 days before calving until 30 days after calving at a feeding rate of 200 g/cow/day. Cows in the control group (CG) were fed a feed ration without the supplement. Measurements started 6 days before calving and continued until 21 days after calving. The following indicators were registered with the RumiWatch system: rumination time, eating time, drinking time, rumination chews, eating chews, drinking gulps, boluses, chews per minute, and chews per bolus; and with the SmaXtec system: reticulorumen temperature and pH, and cow activity. According to our results, feeding cows from 21 days before calving to 30 days after calving with a feed supplement containing alfa-amylase and beta-glucanase (Optipartum C+ 200, at a dose of 200 g/cow/day) can increase rumination time and eating time by 9%, drinking time by 19%, rumination chews by 11%, eating chews by 16%, boluses per rumination by 13%, chews per minute by 5%, and chews per bolus by 16%. We found a 1.28% lower reticulorumen pH and a 0.64% lower reticulorumen temperature in cows fed the supplement compared with control group cows. Cows fed the enzyme supplement were also 8.80% more active.

Keywords: Alfa-Amylase, Beta-Glucanase, cows, in-line, sensors

Procedia PDF Downloads 318
18233 Localization of Pyrolysis and Burning of Ground Forest Fires

Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov

Abstract:

This paper presents the results of experiments carried out at a specialized test site to establish macroscopic patterns of heat and mass transfer processes when localizing model combustion sources of ground forest fires with the use of barrier lines in the form of a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a complex analysis of the effect of the dispersion of the water aerosol (concentration and size of droplets) used to form the barrier line. It is shown that effective conditions for localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of barrier lines (water volume, width and depth of the wetted layer of the material, specific irrigation density) for combustion sources with different dimensions, corresponding to real fire extinguishing practice.

Keywords: forest fire, barrier water lines, pyrolysis front, flame front

Procedia PDF Downloads 128
18232 Theoretical Appraisal of Satisfactory Decision: Uncertainty, Evolutionary Ideas and Beliefs, Satisfactory Time Use

Authors: Okay Gunes

Abstract:

Unsatisfactory experiences due to an information shortage regarding the future pay-offs of actual choices yield satisficing decision-making. This research examines, for the first time in the literature, the motivation behind suboptimal decisions under uncertainty by scrutinizing Adam Smith’s and Jeremy Bentham’s assumptions about the nature of the actions that lead to satisficing behavior, in order to clarify the theoretical background of a “consumption-based satisfactory time” concept. The contribution of this paper with respect to the existing literature is threefold: Firstly, it is shown that Adam Smith’s uncertainty is related to the problem of the constancy of ideas and not related directly to beliefs. Secondly, possessions, as in Jeremy Bentham’s oeuvre, are assumed to be just as pleasing, as protecting and improving the actual or expected quality of life, so long as they reduce any displeasure due to the undesired outcomes of uncertainty. Finally, each consumption decision incurs its own satisfactory time period, owed to not feeling hungry, being healthy, not having transportation, etc. This reveals that the level of satisfaction is indeed a behavioral phenomenon whose value depends on the simultaneous satisfaction derived from all activities.

Keywords: decision-making, idea and belief, satisficing, uncertainty

Procedia PDF Downloads 278
18231 Part Performance Improvement through Design Optimisation of Cooling Channels in the Injection Moulding Process

Authors: M. A. Alhubail, A. I. Alateyah, D. Alenezi, B. Aldousiri

Abstract:

In this study, conformal cooling channels (CCC) were employed to dissipate heat from polypropylene (PP) parts injected into a stereolithography (SLA) insert to form tensile and flexural test specimens. The direct metal laser sintering (DMLS) process was used to fabricate a mould with optimised CCC, while optimum injection moulding parameters were obtained using Optimal-D. The results show that optimising the cooling channel layout using a DMLS mould significantly shortened the cycle time without sacrificing the part’s mechanical properties. By applying conformal cooling channels, the cooling phase was reduced by 20 seconds, and defective parts were eliminated.

Keywords: optimum parameters, injection moulding, conformal cooling channels, cycle time

Procedia PDF Downloads 221
18230 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except in the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation reduces estimation time to roughly 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 188
18229 SISSLE in Consensus-Based Ripple: Some Improvements in Speed, Security, Last Mile Connectivity and Ease of Use

Authors: Mayank Mundhra, Chester Rebeiro

Abstract:

Cryptocurrencies are rapidly finding wide application in areas such as Real Time Gross Settlements and Payments Systems. Ripple is a cryptocurrency that has gained prominence with banks and payment providers. It solves the Byzantine Generals Problem with its Ripple Protocol Consensus Algorithm (RPCA), where each server maintains a list of servers, called the Unique Node List (UNL), that represents the network for the server and will not collectively defraud it. The server believes that the network has come to a consensus when members of the UNL come to a consensus on a transaction. In this paper, we improve Ripple to achieve better speed, security, last mile connectivity, and ease of use. We implement guidelines and automated systems for building and maintaining UNLs for resilience, robustness, improved security, and efficient information propagation. We enhance the system to ensure that each server receives information from across the whole network rather than just from UNL members. We also introduce the paradigm of UNL overlap as a function of information propagation and the trust a server assigns to its own UNL. Our design not only reduces vulnerabilities such as eclipse attacks but also makes it easier to identify malicious behaviour and entities attempting to fraudulently double spend or stall the system. We provide experimental evidence of the benefits of our approach over the current Ripple scheme: a ≥ 4.97x speedup and a 98.22x improvement in success rate for information propagation, and a ≥ 3.16x speedup and a 51.70x improvement in success rate for consensus.
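
The UNL quorum rule described above can be sketched as follows. The 80% threshold is the figure commonly cited for RPCA validation rounds, and the server names are hypothetical.

```python
def unl_consensus(unl_votes, threshold=0.8):
    """RPCA-style quorum check: a server declares a transaction agreed
    when at least `threshold` of its Unique Node List (UNL) votes yes.

    `unl_votes` maps UNL member name -> bool vote. The 80% default is
    the commonly cited RPCA figure; member names are illustrative.
    """
    yes = sum(1 for v in unl_votes.values() if v)
    return yes / len(unl_votes) >= threshold

# 4 of 5 UNL members (80%) agree, so consensus is reached.
votes = {"s1": True, "s2": True, "s3": True, "s4": True, "s5": False}
agreed = unl_consensus(votes)
```

The UNL-overlap idea in the paper then concerns how much two servers' `unl_votes` key sets intersect, which governs whether their quorums can disagree.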

Keywords: Ripple, Kelips, unique node list, consensus, information propagation

Procedia PDF Downloads 139
18228 Developing Cyber Security Asset Management Framework for UK Rail

Authors: Shruti Kohli

Abstract:

The sophistication and pervasiveness of cyber-attacks are constantly growing, driven partly by technological progress, profitable applications in organized crime and state-sponsored innovation. The modernization of rail control systems has resulted in an increasing reliance on digital technology and increased the potential for security breaches and cyber-attacks. This research track showcases the need for developing a secure reusable scalable framework for enhancing cyber security of rail assets. A cyber security framework has been proposed that is being developed to detect the tell-tale signs of cyber-attacks against industrial assets.

Keywords: cyber security, rail asset, security threat, cyber ontology

Procedia PDF Downloads 425
18227 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done

Authors: Geir Kirkebøen

Abstract:

Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence based) ‘best practice’ for how to do this. The two questions explored in this study are: How do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario and a survey part. In the scenario part, the doctors should imagine that a patient, recently diagnosed with a serious cancer diagnosis, has asked them: ‘How long can I expect to live with such a diagnosis? I want an honest answer from you!’ The doctors should assume that the diagnosis is certain, and that from an extensive recent study they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long such patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time as described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical answer was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later etc. The survey part of the study clearly indicates that the main reason why the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part of the study confirmed this finding. The majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason to not give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis, and that they should be given hope. 
The question then is how to communicate the uncertainty of the prognosis in a realistic and optimistic, hopeful, way. Based on psychological research, our hypothesis is that the best way to do this is to explicitly describe the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and emphasize to the patient the (small) possibility of being a ‘lucky outlier’. We tested this hypothesis in two scenario studies with laypeople as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve’s right-skewness (e.g., concrete examples of ‘positive outliers’), and that communicating expected survival time this way not only provides people with hope, but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of the uncertainty regarding the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.
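
The ‘lucky outlier’ idea can be made concrete with a small sketch. Assuming, purely for illustration, that survival time follows a log-normal distribution (a generic right-skewed curve; the study does not specify one), the chance of surviving past a multiple of the median is:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def outlier_chance(median_months, sigma, factor=2.0):
    """P(survival > factor * median) under a log-normal survival time.

    The log-normal is only a stand-in for a generic right-skewed
    survival curve; mu = ln(median) because the log-normal median is
    exp(mu), so P(T > factor*median) = 1 - Phi(ln(factor) / sigma).
    """
    z = math.log(factor) / sigma
    return 1.0 - normal_cdf(z)

# E.g. a median survival of 12 months with moderate skew (sigma = 0.8):
# the chance of living past 24 months is roughly one in five.
p = outlier_chance(12.0, 0.8)
```

Note that the probability of being a ‘positive outlier’ grows with the skew parameter, which is exactly the hopeful-yet-realistic message the study argues for.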

Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis

Procedia PDF Downloads 322
18226 A Simple Recursive Framework to Generate Gray Codes for Weak Orders in Constant Amortized Time

Authors: Marsden Jacques, Dennis Wong

Abstract:

A weak order is a way to rank n objects where ties are allowed. In this talk, we present a recursive framework to generate Gray codes for weak orders. We then describe a simple algorithm based on the framework that generates 2-Gray codes for weak orders in constant amortized time per string. This framework can easily be modified to generate other Gray codes for weak orders. We provide an example on using the framework to generate the first Shift Gray code for weak orders, also in constant amortized time, where consecutive strings differ by a shift or a symbol change.
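
For intuition, the sketch below enumerates all weak orders of n objects encoded as Cayley permutations (rank vectors). This is plain brute-force enumeration, not the constant-amortized-time Gray code presented in the talk; it only illustrates the objects being listed.

```python
from itertools import product

def weak_orders(n):
    """All weak orders of n objects as rank vectors (Cayley permutations):
    each object gets a rank in 1..k, ties allowed, and every rank from 1
    up to the maximum used rank must occur.

    Brute force over all n^n rank vectors; fine for small n, unlike the
    talk's constant-amortized-time Gray code generation.
    """
    for ranks in product(range(1, n + 1), repeat=n):
        m = max(ranks)
        if all(v in ranks for v in range(1, m + 1)):
            yield ranks

# The counts follow the ordered Bell (Fubini) numbers: 1, 3, 13, 75, ...
count3 = len(list(weak_orders(3)))
```

In a Gray code listing of these strings, consecutive rank vectors would differ in at most two positions (2-Gray code) or by a shift or single symbol change (shift Gray code).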

Keywords: weak order, Cayley permutation, Gray code, shift Gray code

Procedia PDF Downloads 167
18225 Comfort in Green: Thermal Performance and Comfort Analysis of Sky Garden, SM City, North EDSA, Philippines

Authors: Raul Chavez Jr.

Abstract:

The green roof body of knowledge appears to be in its infancy in the Philippines. To contribute to its development, this study intends to answer the question: does an existing green roof in Metro Manila perform well in providing thermal comfort and satisfaction to users? The study focuses on the thermal sensation and satisfaction of users, surface temperature comparison, weather data comparison between the site (Sky Garden) and the local weather station (PAG-ASA), and the roof's thermal resistance capacity. The researcher conducted a point-in-time survey in parallel with weather data gathering from PAG-ASA and the Sky Garden. Ambient and surface temperatures were measured with a digital anemometer (with humidity and temperature sensors) and a non-contact infrared thermometer, respectively. Furthermore, to determine the Sky Garden's overall thermal resistance, materials found on site were identified and tabulated based on specified locations. The study revealed that the Sky Garden can be considered comfortable based on the PMV-PPD model of ASHRAE Standard 55, with similar results from the thermal comfort and thermal satisfaction surveys, although the actual conditions of the Sky Garden, plotted on a psychrometric chart, fall beyond the contextualized comfort zone. In addition, the ground floor benefited the most in terms of lower average ambient temperature and humidity compared to the Sky Garden. Lastly, surface temperature data indicate that the green roof portion reached the highest average temperature yet performed well in terms of heat resistance compared to other locations. These results provide valuable baseline information on the actual performance of a green roof in Metro Manila that could be vital for enhancing the system locally and for future studies.

Keywords: green roof, thermal analysis, thermal comfort, thermal performance

Procedia PDF Downloads 161
18224 Time-Domain Expressions for Bridge Self-Excited Aerodynamic Forces by Modified Particle Swarm Optimizer

Authors: Hao-Su Liu, Jun-Qing Lei

Abstract:

This study introduces the theory of the modified particle swarm optimizer and its application to time-domain expressions for bridge self-excited aerodynamic forces. Based on the indicial function expression and the rational function expression of the time-domain form of bridge self-excited aerodynamic forces, the characteristics of the two methods, i.e., the modified particle swarm optimizer and the conventional search method, are compared in the flutter-derivative fitting process. Theoretical analysis and numerical results indicate that, whether the indicial function expression or the rational function expression is adopted, the flutter derivatives fitted by the modified particle swarm optimizer show better goodness of fit with those obtained from experiment. For flutter derivatives with higher nonlinearity, the self-excited aerodynamic forces computed from flutter derivatives fitted by the modified particle swarm optimizer are much closer to the experimental ones. The modified particle swarm optimizer was used to identify the parameters of time-domain expressions for the flutter derivatives of an actual long-span highway-railway truss bridge with double decks at wind attack angles of 0°, -3°, and +3°. It was found that this method effectively overcomes the bounded-attenuation-coefficient problem of the conventional search method and is able to search in an unbounded area. Accordingly, this study provides a method for engineering practice to efficiently obtain time-domain expressions for bridge self-excited aerodynamic forces.
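
A generic, textbook particle swarm optimizer (not the paper's modified variant) can be sketched as follows; here a simple quadratic stands in for the squared-error objective of the flutter-derivative fitting, and all parameter values are conventional defaults, not taken from the study.

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing f over [lo, hi]^dim.

    Each particle is pulled toward its own best-seen position (c1 term)
    and the swarm's global best (c2 term), with inertia w. A textbook
    PSO, not the modified variant described in the paper.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# A quadratic bowl stands in for the fitting residual to be minimized.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

In the actual application, `f` would be the squared error between measured flutter derivatives and those predicted by the indicial or rational function expression for a candidate parameter vector.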

Keywords: time-domain expressions, bridge self-excited aerodynamic forces, modified particle swarm optimizer, long-span highway-railway truss bridge

Procedia PDF Downloads 310
18223 Research and Implementation of a Cross-Domain Data Sharing System in a Net-Centric Environment

Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan

Abstract:

With the rapid development of network and communication technology, a great deal of data has been generated in different domains of a network. These data show a trend toward increasing scale and more complex structure. Therefore, an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides a data exchange service through publish-subscribe technology that supports asynchronism and multi-to-multi communication, which adapts to the needs of a dynamic and large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model data attributes such as subject, object, permission, and environment, which effectively monitors the activities of users accessing resources and ensures that legitimate users get effective access rights within a legal time. The cross-domain access security negotiation sub-system automatically determines the access rights between different security domains through trust policy management and negotiation algorithms in the process of interactive disclosure of digital certificates and access control policies, providing an effective means for cross-domain trust establishment and access control in a distributed environment. The CDSS's asynchronous, multi-to-multi, and loosely coupled communication features adapt well to data exchange and sharing in dynamic, distributed, and large-scale network environments. In future work, CDSS will be given new features to support mobile computing environments.
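
The publish-subscribe pattern underlying the data distribution sub-system can be sketched with a toy in-process broker. The topic name and message contents are illustrative; a real CDSS would add queuing, security domains, and the access control described above.

```python
from collections import defaultdict

class Broker:
    """Tiny topic-based publish-subscribe broker illustrating the
    many-to-many, loosely coupled exchange the data distribution
    sub-system is built on: publishers and subscribers know only the
    topic, never each other.
    """
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

# Two independent domains subscribe to the same topic; one publish
# reaches both without the publisher knowing either subscriber.
inbox_a, inbox_b = [], []
broker = Broker()
broker.subscribe("sensor/updates", inbox_a.append)
broker.subscribe("sensor/updates", inbox_b.append)
broker.publish("sensor/updates", {"id": 1, "value": 42})
```

This decoupling is what makes the pattern asynchronous and multi-to-multi: adding a third domain is just another `subscribe` call, with no change to the publisher.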

Keywords: data sharing, cross-domain, data exchange, publish-subscribe

Procedia PDF Downloads 120
18222 Congestion Mitigation on an Urban Arterial through Infrastructure Intervention

Authors: Attiq Ur Rahman Dogar, Sohaib Ishaq

Abstract:

Pakistan has experienced rapid motorization in the last decade. Due to soft leasing schemes from banks and an increase in average household income, even the middle class can now afford cars. The public transit system is inadequate and sparse. For these reasons, traffic demand on urban arterials has increased manifold, and poor urban transit planning and aging transportation systems have resulted in traffic congestion. The focus of this study is to improve traffic flow on a section of N-5 passing through the Rawalpindi downtown. We analyze traffic conditions on this section and investigate the impact of traffic signal coordination on travel time. In addition to signal coordination, we also examine the effect of different infrastructure improvements on travel time. After an economic analysis of the alternatives and discussions, an improvement plan for the Rawalpindi downtown urban arterial section is proposed for implementation.
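
Signal coordination along an arterial is often expressed through offsets that create a ‘green wave’. The sketch below computes such offsets from cumulative signal spacings and a progression speed; the distances and speed are hypothetical, not taken from the N-5 study.

```python
def green_wave_offsets(distances_m, speed_kmh):
    """Signal offsets (seconds) for a coordinated 'green wave':
    each downstream signal turns green when a platoon leaving the
    first signal at the progression speed arrives at it.

    distances_m: cumulative distances from the first intersection.
    """
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return [round(d / speed_ms, 1) for d in distances_m]

# Hypothetical spacing of four signals along the arterial, with a
# 50 km/h progression speed:
offsets = green_wave_offsets([0, 400, 900, 1500], 50)
```

In practice the offsets are then wrapped modulo the common cycle length chosen for the corridor, which is where cycle-length optimization enters the economic comparison.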

Keywords: signal coordination, infrastructure intervention, infrastructure improvement, cycle length, fuel consumption cost, travel time cost, economic analysis, travel time, Rawalpindi, Pakistan, traffic signals

Procedia PDF Downloads 310
18221 Competition Between the Effects of Pesticides and Immune-activation on the Expression of Toll Pathway Genes

Authors: Dani Sukkar, Ali Kanso, Philippe Laval-Gilly, Jairo Falla-Angel

Abstract:

The honeybee immune system is challenged by different risk factors that induce various responses. However, complex scenarios in which bees are exposed to different pesticides simultaneously with immune activation are not well evaluated. The Toll pathway is one of the main signaling pathways studied in invertebrate immune responses, and it is a good indicator of the effect of such complex interactions, in addition to key signaling elements of other pathways, such as Relish of the immune deficiency (IMD) pathway, Eater, the phagocytosis receptor, and vitellogenin levels. Honeybee hemocytes extracted from 5th-instar larvae were exposed to imidacloprid and/or amitraz, with or without zymosan A as an immune activator. The expression of multiple immune-related genes, including spaetzle, Toll, myD88, relish, eater, and vitellogenin, was studied by real-time polymerase chain reaction after RNA extraction. The results demonstrate that the Toll pathway is mainly affected by the pesticides imidacloprid and amitraz, especially by their different combinations. Furthermore, immune activation by zymosan A, a fungal cell-wall component, mitigates to some extent the effect of pesticides on the different levels of the Toll pathway. In addition, imidacloprid, amitraz, and zymosan A have complex and context-specific interactions that affect immune-gene expression differently depending on the level of immune activation and the pathway evaluated.

Keywords: toll pathway, immune modulation, β-glucan, imidacloprid, amitraz, honeybees, immune genes

Procedia PDF Downloads 76
18220 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate

Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian

Abstract:

This article focuses on improving the production process of the Indonesian land certificate product printed by Perum Peruri, a state-owned enterprise. Based on the data obtained, there were several customer complaints about the 2019 land certificate production, which reflect negatively on Perum Peruri among its loyal customers. Almost all the complaints refer to ‘defective printouts and a difference between the products in the packaging and the packaging labels, both in type and in quantity’. To overcome this problem, we improve the production process with a focus on the complaint that ‘there is a difference between products in the packaging and the packaging labels’. The improvements to the land certificate production process rely on weighing-scale technology and a QR code on the packaging label. In addition, the QR code on the packaging label facilitates the tracking of product data. With this method, we hope to reduce to 0% both the error rate between products in the packaging and the packaging label, in terms of quantity, type, and product number on the land certificate, and the error rate in dispatching land certificates to their many destinations. With this solution, we also hope to obtain precise data and real-time reports on land certificate production in the near future, so that track and trace can be implemented as the solution for land certificate production.

Keywords: land certificates, QR code, track and trace, packaging

Procedia PDF Downloads 153
18219 Differential Approach to Technology Aided English Language Teaching: A Case Study in a Multilingual Setting

Authors: Sweta Sinha

Abstract:

Rapid evolution of technology has changed language pedagogy as well as perspectives on language use, leading to strategic changes in discourse studies. We are now firmly embedded in a time when digital technologies have become an integral part of our daily lives. This has led to generalized approaches to English Language Teaching (ELT), which raise two concerns in linguistically diverse settings: a) the diverse linguistic background of the learner might interfere with the learning process, and b) the differential level of already acquired knowledge of the target language might make classroom practices too easy or too difficult for the target group of learners. ELT needs a more systematic and differential pedagogical approach for greater efficiency and accuracy. The present research analyses the need to identify learner groups based on different levels of target language proficiency, drawing on a longitudinal study of 150 undergraduate students. The learners were divided into five groups based on their performance on a twenty-point scale in Listening, Speaking, Reading, and Writing (LSRW). The groups were then subjected to varying durations of technology-aided language learning sessions, and their performance was recorded again on the same scale. Identifying groups and introducing differential teaching and learning strategies led to better results than generalized teaching strategies. Language teaching includes different aspects: the organizational, the technological, the sociological, the psychological, the pedagogical, and the linguistic. A facilitator must account for all these aspects in a carefully devised differential approach that meets the challenge of learner diversity. Apart from justifying the formation of differential groups, the paper attempts to devise a framework accounting for all these aspects in order to make ELT in a multilingual setting much more effective.
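
The five-group division on a twenty-point LSRW scale might be sketched as follows; the equal-width cut-offs are an assumption for illustration, since the study does not state its exact banding rule.

```python
def proficiency_group(lsrw_score, n_groups=5, max_score=20):
    """Map a learner's LSRW score on a twenty-point scale to one of
    five proficiency groups (1 = lowest band, 5 = highest).

    Equal-width banding (4 points per band) is assumed here; the
    study's actual cut-offs are not given in the abstract.
    """
    if not 0 <= lsrw_score <= max_score:
        raise ValueError("score out of range")
    width = max_score / n_groups  # 4 points per band
    return min(n_groups, int(lsrw_score // width) + 1)

# Example scores from a hypothetical cohort mapped to bands:
groups = [proficiency_group(s) for s in (3, 8, 12, 17, 20)]
```

Each band would then receive a session duration and material difficulty tuned to it, rather than the single generalized treatment the paper argues against.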

Keywords: differential groups, English language teaching, language pedagogy, multilingualism, technology aided language learning

Procedia PDF Downloads 388
18218 Impact of Global Warming on the Total Flood Duration and Flood Recession Time in the Meghna Basin Using Hydrodynamic Modelling

Authors: Karan Gupta

Abstract:

Floods cause huge losses each year, and their impact is amplified as the total flood duration and recession time increase. Moreover, floods have increased in recent years due to climate change in floodplains. In the context of global climate change, the Paris Agreement (2015) aims to keep the increase in global average temperature well below 2°C, with a limit of 1.5°C. This study therefore investigates the impact of increasing temperature on the stage, discharge, total flood duration, and recession time in the Meghna River basin in Bangladesh. It considers the 100-year return period flood flows in the Meghna River under specific warming levels (SWLs) of 1.5°C, 2°C, and 4°C. The results showed that the rate of increase of flood duration is nearly 50% lower at ∆T = 1.5°C than at ∆T = 2°C, whereas the rate of increase of recession duration is 75% lower at ∆T = 1.5°C than at ∆T = 2°C. Understanding the change in total flood duration as well as flood recession time gives better insight for effectively planning flood mitigation measures.

Keywords: flood, climate change, Paris Agreement, Bangladesh, inundation duration, recession duration

Procedia PDF Downloads 135
18217 Inventory Control for Purchased Part under Long Lead Time and Uncertain Demand: MRP vs Demand-Driven MRP Approach

Authors: M. J. Shofa, A. Hidayatno, O. M. Armand

Abstract:

MRP as a production control system is appropriate for a deterministic environment. Unfortunately, most production settings are stochastic; customer demand, in particular, is uncertain. Demand-Driven MRP (DDMRP) is a newer approach to inventory control that explicitly deals with demand uncertainty. The objective of this paper is to compare how MRP and DDMRP perform under a long lead time and uncertain demand in terms of on-hand inventory levels. The evaluation is conducted through a discrete event simulation using purchased part data from an automotive company. The result is that MRP yields 50,759 pcs/day on hand while DDMRP yields 34,835 pcs/day (a reduction of roughly 31%), indicating that DDMRP controls on-hand inventory more effectively than MRP.
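The size of the improvement follows directly from the two reported on-hand averages; the abstract rounds the ratio up:

```python
# Average on-hand inventory levels reported by the simulation (pcs/day)
mrp_on_hand = 50_759
ddmrp_on_hand = 34_835

# Relative reduction achieved by DDMRP over MRP
reduction = (mrp_on_hand - ddmrp_on_hand) / mrp_on_hand
print(f"DDMRP reduces average on-hand inventory by {reduction:.1%}")  # 31.4%
```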

Keywords: Demand-Driven MRP, long lead time, MRP, uncertain demand

Procedia PDF Downloads 298
18216 The Effect of Naringenin on the Apoptosis in T47D Cell Line of Breast Cancer

Authors: AliAkbar Hafezi, Jahanbakhsh Asadi, Majid Shahbazi, Alijan Tabarraei, Nader Mansour Samaei, Hamed Sheibak, Roghaye Gharaei

Abstract:

Background: Breast cancer is the most common cancer in women. In most cancer cells, apoptosis is blocked. Given the importance of apoptosis in cancer cell death and the role of different genes in its induction or inhibition, the search for compounds that can initiate apoptosis in tumor cells is regarded as a new strategy in anticancer drug discovery. The aim of this study was to investigate the effect of Naringenin (NGEN) on apoptosis in the T47D cell line of breast cancer. Materials and Methods: In this in vitro experimental study, the T47D breast cancer cell line was selected as the sample. The cells were treated for 24, 48, and 72 hours with doses of 20, 200, and 1000 µM of Naringenin. Then, the transcription levels of genes involved in apoptosis, including Bcl-2, Bax, Caspase 3, Caspase 8, Caspase 9, P53, PARP-1, and FAS, were assessed using real-time PCR. The collected data were analyzed using IBM SPSS Statistics 24.0. Results: Naringenin at doses of 20, 200, and 1000 µM at all three time points (24, 48, and 72 hours) increased the expression of Caspase 3, P53, PARP-1, and FAS, reduced the expression of Bcl-2, and increased the Bax/Bcl-2 ratio; however, it had no significant effect on the expression of Bax, Caspase 8, or Caspase 9 at any of the studied doses or times. Conclusion: This study indicates that Naringenin can reduce the growth of some cancer cells and cause their death through increased apoptosis and decreased expression of the anti-apoptotic Bcl-2 gene, resulting in the induction of apoptosis via both the internal and external pathways.
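The abstract does not state how relative transcription levels were derived from the real-time PCR runs; a common choice is the 2^-ΔΔCt method, sketched here with hypothetical Ct values:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target
    gene to a reference gene in each condition, then compare treated
    against control."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: target amplifies 2 cycles earlier after treatment
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0, i.e. four-fold up-regulation
```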

Keywords: apoptosis, breast cancer, naringenin, T47D cell line

Procedia PDF Downloads 46
18215 The Construction of Multilingual Online Gaming Community

Authors: Dina Alnefaie

Abstract:

This poster presents a study of a private Discord server with thirteen multilingual gamers, aiming to explore the elements that construct a multilingual online gaming community. The study focuses on the communication practices of four Saudi female and male gamers, using various data collection methods over one year, including online observations through recorded videos and screenshots, interviews, and informal conversations. The primary findings show that translanguaging was a prominent feature of their verbal and textual communication practices. These practices, which mostly accompany cultural ones, were used to facilitate communication and to express the gamers' identities in an intercultural context.

Keywords: online community construction, perceptions, multilingualism, digital identity

Procedia PDF Downloads 81
18214 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface targets such as landmines and unexploded ordnance, and for through-wall imaging in security applications. In the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of subsurface targets. The tails of this hyperbola produce undesirably low resolution along the synthetic aperture direction. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. The hyperbolic signature is therefore usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal of a typical GPR image is to display the spatial location and reflectivity of an underground object, so the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution together with good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. An improved BP based on cross-correlation between the received signals is therefore proposed to decrease noise and suppress artifacts. To further improve the quality of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
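The standard BP step for a monostatic B-scan can be sketched as follows; the function name, geometry, and constant velocity v are illustrative assumptions, and the paper's cross-correlation weighting is not included:

```python
import numpy as np

def back_project(bscan, ant_x, dt, v, xs, zs):
    """Standard back-projection for a monostatic B-scan.

    bscan : (n_samples, n_traces) array, one A-scan per antenna position
    ant_x : antenna x-positions along the synthetic aperture
    dt    : time-sample interval; v : assumed propagation velocity
    xs,zs : image pixel coordinates
    For each pixel, every trace is sampled at the two-way travel time
    from that antenna position to the pixel and the values are summed,
    collapsing the hyperbola onto the true target location."""
    img = np.zeros((len(zs), len(xs)))
    n_samples = bscan.shape[0]
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            t = 2.0 * np.sqrt((x - ant_x) ** 2 + z ** 2) / v
            idx = np.rint(t / dt).astype(int)
            ok = idx < n_samples  # discard delays beyond the time record
            img[i, j] = bscan[idx[ok], np.flatnonzero(ok)].sum()
    return img
```

Coherent summation along the computed delay curve is what focuses the hyperbolic signature; the improved algorithm described above would additionally weight each contribution by the cross-correlation between received signals.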

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 448
18213 Implementing Education 4.0 Trends in Language Learning

Authors: Luz Janeth Ospina M.

Abstract:

The fourth industrial revolution is changing the role of education substantially and, therefore, the role of instructors and learners at all levels. Education 4.0 is an imminent response to the needs of a globalized world where humans and technology are being aligned to enable endless possibilities, among them the need for students, as digital natives, to communicate effectively in at least one language besides their mother tongue, and the corresponding requirement for instructors to develop theirs. This is an exploratory study in which a control group (N = 21), all students of Spanish as a foreign language at the university level, responded after taking a Spanish class to an online questionnaire about the engagement, atmosphere, and environment in which the course was delivered. The aspects considered in the survey related to the instructor's teaching style, including: (a) active, hands-on learning; (b) flexibility in in-class activities, easily switching between small group work, individual work, and whole-class discussion; and (c) integrating technology into the classroom. Strongly believing in these principles, the instructor deliberately taught the course in a SCALE-UP room, as such a room can facilitate a positive and encouraging learning environment. These aspects are trends related to Education 4.0 and have become integral to the instructor's pedagogical stance, which calls for a constructive-affective role instead of a transmissive one. As expected with a learning environment that (a) fosters student engagement and (b) improves student outcomes, the subjects were highly engaged, which was partially due to the learning environment. An overwhelming majority of students (all but one) agreed or strongly agreed that the atmosphere and the environment were ideal. The outcomes of this study are relevant and indicate that it is time for teachers to build a meaningful relationship between humans and technology. We should see the trends of Education 4.0 not as a threat but as practices that belong in the hands of critical and creative instructors whose pedagogical stance responds to the needs of learners in the 21st century.

Keywords: active learning, education 4.0, higher education, pedagogical stance

Procedia PDF Downloads 111
18212 Landsat Data from Pre Crop Season to Estimate the Area to Be Planted with Summer Crops

Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann

Abstract:

The estimated area of land to be planted with annual crops, and its stratification by municipality, are important variables in crop forecasting. In Brazil, this information is currently produced by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images, so one alternative is to work with remote sensing data from dates before the growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic Information Systems (GIS) and digital image processing techniques were applied to the available data. Supervised and unsupervised classifications were used on data in digital number and reflectance formats and on the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts with the highest probability of being planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region, and the estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques performed at an excellent level according to the kappa index. The proportion of crops stratified by municipality was derived from field work during the growing season; these proportion coefficients were applied to the area of land to be planted with summer crops (derived from the Landsat data), making it possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and stratification procedures. Nevertheless, this methodology can be improved to provide good crop area estimates using remote sensing data, despite the cloud cover during the growing season.
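The NDVI images referred to above are computed per pixel from the red and near-infrared bands (for the Landsat TM sensor, bands 3 and 4 respectively); a minimal sketch:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel.
    Works on reflectance or digital-number arrays; eps avoids
    division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red:
print(ndvi(0.50, 0.08))  # about 0.72
```

Bare soil and senescent cover score near zero, which is what lets pre-season NDVI separate tracts likely to be planted.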

Keywords: area intended for summer culture, estimated area planted, agriculture, Landsat, planting schedule

Procedia PDF Downloads 145
18211 Resistance Training Contribution to the Aerobic Component of the International Physical Activity Guidelines in Adults

Authors: Neha Bharti, Martin Sénéchal, Danielle R. Bouchard

Abstract:

Mostly due to lack of time, only 15% of adults currently meet the International Physical Activity Guidelines, which state that every adult should accumulate a minimum of 150 minutes of aerobic exercise per week at moderate to vigorous intensity, in bouts of at least 10 minutes each, in addition to two days of resistance training. Recent studies suggest that any bout of aerobic exercise reaching moderate intensity has the potential to improve health. If one could reach moderate intensity while doing resistance training, this could reduce the total weekly time needed to meet the guidelines. Objectives: 1) to determine whether overweight and older adults can reach at least moderate intensity while doing resistance training, compared with young non-overweight adults; 2) to identify whether the proportion of time spent at moderate to vigorous intensity differs in overweight adults and older adults compared with young non-overweight adults when lifting 70% or 80% of maximal load; 3) to determine the variables associated with the proportion of time spent at moderate to vigorous intensity during resistance training. Methods: Sixty participants already doing resistance training were recruited (20 young non-overweight adults, 20 overweight adults, and 20 older adults). Participants visited the fitness facility three times, separated by at least 48 hours, and performed eight resistance exercises each time. The first visit was used to collect baseline measurements and to measure the maximal load for each of the eight exercises. The second and third visits were performed wearing a heart rate monitor to record heart rate and measure exercise intensity; these two sessions were performed at 70% and 80% of maximal capacity. Moderate intensity was defined as 40% of heart rate reserve. Results: The proportion of time spent at moderate to vigorous intensity ranged from 51% to 93% among the three groups. No difference was observed between the young group and the overweight group in the proportion of time spent at moderate to vigorous intensity: 82.6% (69.2-94.6) vs 92.5% (73.3-99.1). However, older adults spent a lower proportion of time at moderate to vigorous intensity in both sessions: 51.5% (22.0-86.6); P < .01. When doing resistance training at 70% and 80% of maximal capacity, the proportions of time spent at moderate to vigorous intensity were 82.3% (56.1-94.7) and 82.0% (59.2-98.0), with no significant difference (P = .83). Conclusion: This study suggests that overweight adults and older adults reach moderate intensity for at least 51% of the time spent doing resistance training, although this proportion was lower for older adults than for young non-overweight adults. For adults aged 60 or less, three weekly resistance training sessions of 60 minutes could thus be enough to meet both the aerobic and resistance components of the International Physical Activity Guidelines. Further research is needed to test whether resistance training at moderate to vigorous intensity provides the same health benefits as meeting the guidelines as currently written.
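The 40%-of-heart-rate-reserve threshold used to define moderate intensity follows the standard Karvonen calculation; the resting and maximal heart rates below are hypothetical values, not measurements from the study:

```python
def hrr_target(hr_rest, hr_max, fraction=0.40):
    """Karvonen method: target heart rate (bpm) at a given fraction of
    heart rate reserve, where HRR = hr_max - hr_rest."""
    return hr_rest + fraction * (hr_max - hr_rest)

# e.g. resting HR of 60 bpm and maximal HR of 180 bpm
print(hrr_target(60, 180))  # 108.0 bpm marks the moderate-intensity floor
```

Any monitored heart rate above this per-participant threshold counts toward the moderate-to-vigorous proportion reported above.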

Keywords: aerobic exercise, international physical activity guidelines, moderate to vigorous intensity, resistance training

Procedia PDF Downloads 533
18210 A Dual Channel Optical Sensor for Norepinephrine via Situ Generated Silver Nanoparticles

Authors: Shalini Menon, K. Girish Kumar

Abstract:

Norepinephrine (NE) is one of the naturally occurring catecholamines and acts both as a neurotransmitter and as a hormone. Catecholamine levels are used for the diagnosis and monitoring of phaeochromocytoma, a neuroendocrine tumor of the adrenal medulla. The development of simple, rapid, and cost-effective sensors for NE remains a great challenge. Herein, a dual-channel sensor has been developed for the determination of NE. A mixture of AgNO₃, NaOH, NH₃·H₂O, and cetrimonium bromide at appropriate concentrations was used as the working solution. An appropriate volume of NE solution was added to the thoroughly vortexed mixture and, after a fixed time, the fluorescence and absorbance were measured, with fluorescence excitation at 400 nm. The sensor supports both colorimetric and fluorimetric determination of NE: the metal-enhanced fluorescence of the in situ generated silver nanoparticles forms the basis of the fluorimetric assay, whereas the appearance of a brown color in the presence of NE enables colorimetric detection. Wide linear ranges and sub-micromolar detection limits were obtained with both techniques. Moreover, the colorimetric approach was applied to the determination of NE in synthetic blood serum, and the results were compared with the classic high-performance liquid chromatography (HPLC) method. Recoveries between 97% and 104% were obtained using the proposed method. Based on five replicate measurements, the relative standard deviation (RSD) for NE determination in the examined synthetic blood serum was 2.3%, indicating the reliability of the proposed sensor for real sample analysis.

Keywords: norepinephrine, colorimetry, fluorescence, silver nanoparticles

Procedia PDF Downloads 110
18209 Memory Types in Hemodialysis (HD) Patients; A Study Based on Hemodialysis Duration, Zahedan: South East of Iran

Authors: Behnoush Sabayan, Ali Alidadi, Saeid Ebarhimi, N. M. Bakhshani

Abstract:

Hemodialysis (HD) patients are at high risk of atherosclerotic and vascular disease, yet little information is available about the impact of HD on the brain structure of these patients. The aim of this study was to investigate the effect of long-term HD on brain structure, using non-contrast MRI to evaluate imaging findings. Our study included 80 HD patients, of whom 39 had less than six months of HD and 41 had a history of HD of more than six months. The population had a mean age of 51.60 years, and 27.5% were female. Patients who had been hemodialyzed for a long time (median duration of HD up to 4 years) showed more small-vessel ischemia than patients who had undergone HD for a shorter term (median 3 to 5 months). Most of the small-vessel ischemia was located in the periventricular, subcortical, and white matter regions (1.33 ± 0.471, 1.23 ± 0.420, and 1.39 ± 0.490). Other brain abnormalities, such as central pontine abnormality, global brain atrophy, thinning of the corpus callosum, and frontal lobe atrophy, were also found (P < 0.01). The present study demonstrated that patients under HD for a longer time had more small-vessel ischemia, and we conclude that this small-vessel ischemia might be a causative mechanism of brain atrophy in chronic hemodialysis patients. However, additional research is needed in this area.

Keywords: hemodialysis patients, duration of hemodialysis, MRI, Zahedan

Procedia PDF Downloads 210
18208 Comparison of Noise Emissions in the Interior of Passenger Cars

Authors: Martin Kendra, Tomas Skrucany, Jaroslav Masek

Abstract:

Noise is one of the negative factors influencing human health. This article deals with the measurement of noise emitted by a road vehicle and its parts during operation. Measurements were made in the interiors of common passenger cars with a digital sound level meter. The results compare noise values in cars with different body shapes, a factor that influences the driver's health. Transport has considerable ecological effects, many of them detrimental to environmental sustainability; roads and traffic exert a variety of direct and mostly detrimental effects on nature.
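A digital sound level meter reports sound pressure level on the standard decibel scale relative to 20 µPa (any frequency weighting such as dB(A) is applied before this step); the basic conversion is:

```python
import math

P_REF = 20e-6  # reference sound pressure in air: 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa, from an RMS pressure in Pa."""
    return 20.0 * math.log10(p_rms / P_REF)

print(spl_db(0.2))  # 0.2 Pa RMS -> 80.0 dB, roughly a loud vehicle interior
```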

Keywords: driver, noise measurement, passenger road vehicle, road transport

Procedia PDF Downloads 443
18207 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. Most prior work used emotional speech collected under controlled conditions: actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: (1) the emotions are not natural, which means machines learn to recognize fake emotions; (2) the emotions are limited in quantity and poor in speaking variety; (3) SER is language-dependent; and (4) consequently, each time researchers start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first steps in this sequence is speech detection. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
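A fully connected speech/non-speech classifier over per-frame MFCC vectors can be sketched as below; the layer sizes and the use of 13 MFCCs are illustrative assumptions rather than the paper's architecture, and the weights here are randomly initialized, not trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SpeechDetector:
    """Fully connected net mapping an MFCC frame (n_mfcc,) to P(speech)."""

    def __init__(self, n_mfcc=13, hidden=(64, 32)):
        sizes = (n_mfcc, *hidden, 1)
        # He initialization for the ReLU hidden layers; zero biases
        self.weights = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
                        for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def predict_proba(self, frames):
        """frames: (n_frames, n_mfcc) -> (n_frames,) speech probabilities."""
        a = frames
        for w, b in zip(self.weights[:-1], self.biases[:-1]):
            a = relu(a @ w + b)
        return sigmoid(a @ self.weights[-1] + self.biases[-1]).ravel()
```

In use, frames with probability above a threshold would be kept as speech and handed to the emotion recognition stage.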

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 93
18206 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

Geomatics techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are increasingly used to obtain digital terrain models (DTMs) for monitoring geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, the point clouds must be filtered of all elements that do not belong to the slopes. Among these elements, vegetation stands out: it has the greatest presence and changes constantly, both seasonally and daily, as it is affected by factors such as wind. One of the best-known indices for detecting vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. Such cameras generally have lower resolution than conventional RGB cameras while costing much more, so alternative indices based on RGB are needed. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Greenness), as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor, which, as in aerial LiDAR, receives several returns of the signal; this information is used for classification of the point cloud. After applying all the techniques at different locations, the results show that the color-based filters allow correct filtering in areas where shadows are not excessive and there is contrast between the color of the slope lithology and the vegetation. As noted, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
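The two RGB indices named above can be sketched per pixel as follows; computing ExG on normalized chromaticity coordinates is a common convention but an assumption here, and the thresholds for the binary masks would be tuned per scene:

```python
def exg(r, g, b, eps=1e-9):
    """Excess Green: 2g - r - b on normalized chromaticity coordinates."""
    total = r + g + b + eps
    rn, gn, bn = r / total, g / total, b / total
    return 2.0 * gn - rn - bn

def gli(r, g, b, eps=1e-9):
    """Green Leaf Index: (2G - R - B) / (2G + R + B)."""
    return (2.0 * g - r - b) / (2.0 * g + r + b + eps)

# A strongly green pixel scores high on both indices; gray scores zero
print(gli(0.2, 0.8, 0.2))  # 0.6
```

Thresholding either index yields the binary vegetation masks applied before the SfM step.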

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 147