Search results for: speech signal
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2308

268 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as a test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling, and a HAAS ST-20 CNC machine is used for turning. Taguchi analysis is used to optimize the surface roughness of the machined parts. An L9 orthogonal array is designed for four controllable factors with three levels each, resulting in nine experimental runs per process (18 in total). The signal-to-noise (S/N) ratio is calculated for the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow, and finish cut; for the milling process, they are feed rate, spindle speed, step-over, and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. The mean surface roughness moved closer to the target, from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to demonstrate the efficient use of Taguchi design analysis to improve surface roughness.
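The S/N ratio and the Cp/Cpk capability indices used in the abstract can be sketched as follows. This is a minimal illustration with made-up roughness readings, not the authors' data; it assumes the nominal-the-best S/N formulation, which is the standard Taguchi choice for a specific target value, and spec limits of 60-90 µin corresponding to the stated 75 ± 15 µin window:

```python
import math

def sn_nominal_the_best(values):
    """Nominal-the-best S/N ratio in dB: 10*log10(mean^2 / variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def process_capability(values, lsl, usl):
    """Cp and Cpk for a sample against lower/upper spec limits."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return cp, cpk

# Hypothetical roughness readings (µin) against the 75 ± 15 µin target window
readings = [70.1, 72.4, 68.9, 71.6, 69.8]
print(sn_nominal_the_best(readings))          # higher S/N = less variation
cp, cpk = process_capability(readings, lsl=60.0, usl=90.0)
print(cp, cpk)                                # Cpk < Cp when off-center
```

Cpk penalizes a process mean that drifts away from the center of the spec window, which is why the abstract reports both indices.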

Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling

Procedia PDF Downloads 138
267 Acoustic Emission for Tool-Chip Interface Monitoring during Orthogonal Cutting

Authors: D. O. Ramadan, R. S. Dwyer-Joyce

Abstract:

The measurement of the interface conditions in a cutting tool contact provides essential information for performance monitoring and control. This interface provides the path for heat flux into the cutting tool. The resulting rise in cutting-tool temperature accelerates the mechanisms of tool wear, affecting tool life and productivity. This zone is represented by the tool-chip interface; therefore, understanding and monitoring this interface is an important issue in machining. In this paper, an acoustic emission (AE) technique was used to find the correlation between AE parameters and the tool-chip interface. For this purpose, a response surface design (RSD) was used to analyse and optimize the machining parameters. The experiment design was based on the face-centered central composite design (CCD) in the Minitab environment. According to this design, a series of orthogonal cutting experiments under different cutting conditions were conducted on a Triumph 2500 lathe to study the sensitivity of the AE signal to changes in tool-chip contact length. The cutting parameters investigated were cutting speed, depth of cut, and feed, and the experiments were performed on 6082-T6 aluminium tube. All orthogonal cutting experiments were conducted unlubricated. The tool-chip contact area was investigated using a scanning electron microscope (SEM). The results indicate that the root mean square (RMS) of the AE signal depends strongly on the cutting speed, increasing as the cutting speed increases. A dependence on the tool-chip contact length was also observed. However, no effect of depth of cut or feed on the RMS was observed. These dependencies are explained in terms of the strain and temperature in the primary and secondary shear zones, the tool-chip sticking and sliding phenomena, and the effect of these mechanical variables on dislocation activity at high strain rates. In conclusion, the acoustic emission technique has the potential to monitor the tool-chip interface in situ during turning and consequently could indicate the approaching end of life of a cutting tool.
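The RMS feature tracked in this study is straightforward to compute. The sketch below uses a synthetic signal, not the authors' AE data, to show how a windowed RMS rises when signal energy increases:

```python
import math

def rms(samples):
    """Root mean square of a signal segment."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def windowed_rms(signal, window):
    """RMS computed over consecutive non-overlapping windows."""
    return [rms(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, window)]

# Synthetic AE trace: low-amplitude section followed by a higher-amplitude burst
quiet = [0.1 * math.sin(0.3 * n) for n in range(100)]
burst = [0.5 * math.sin(0.3 * n) for n in range(100)]
levels = windowed_rms(quiet + burst, window=50)
print(levels)  # RMS rises in the windows covering the burst
```

In practice the AE sensor output would be sampled at high rate and the RMS tracked against cutting speed, as in the experiments above.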

Keywords: Acoustic emission, tool-chip interface, orthogonal cutting, monitoring

Procedia PDF Downloads 469
266 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback

Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland

Abstract:

In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector that covers the surgeon’s hand/wrist degrees of freedom (DOF) and translates the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector: a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure is composed of three lightweight links/joints (pitch, yaw, and roll), each consisting of a low-friction support and an accurate 2-channel optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the tool’s left prong is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a Hall-effect sensor and a magnet bar installed facing each other on the inner sides of the two prongs to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid the high forces that are known to cause bleeding and injury.

Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation

Procedia PDF Downloads 104
265 Analyzing Safety Incidents using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise

Authors: Michael Scott Evans, Andrew Smith

Abstract:

The feeling of fatigue at work can have devastating consequences. The aim of this study was to investigate whether the well-established objective indicator of fatigue used by the rail industry, the Fatigue Risk Index (FRI) calculator, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University’s Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents were recorded from a single British rail franchise between 1 June 2010 and 31 December 2016 in the Safety Management Information System (SMIS). The safety incident types identified as ones in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For these 901 safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) score and Risk Index (RI) score of the train drivers on the day of the incident. Only the working rosters of 64.2% (N = 578) of drivers (550 male and 28 female), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. Analysis of all 578 train drivers involved in safety incidents revealed that 99.8% (N = 577) of FI scores fell within or below the guideline threshold of 45, and 97.9% (N = 566) of RI scores fell below the 1.6 threshold. These scores represent good practice within the rail industry. The findings indicate that the FRI calculator used by this British rail franchise was not an effective predictor of involvement in safety incidents, since only 0.2% of FI scores and 2.1% of RI scores exceeded the guideline thresholds. Further research is needed to determine whether other contributing factors could better explain why such a large proportion of train drivers involved in safety incidents, in which fatigue could have been a contributing factor, have such low FI and RI scores.
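The threshold screening reported above amounts to a simple proportion calculation. A minimal sketch, with made-up scores rather than the study's data, against the stated FI guideline of 45 and RI guideline of 1.6:

```python
def threshold_summary(scores, threshold):
    """Fraction of scores at or below a guideline threshold."""
    below = sum(1 for s in scores if s <= threshold)
    return below / len(scores)

# Hypothetical FI and RI scores for drivers involved in incidents
fi_scores = [20.5, 33.1, 44.9, 46.2, 18.7, 39.0]
ri_scores = [0.9, 1.1, 1.7, 0.8, 1.2, 1.5]
print(threshold_summary(fi_scores, 45))   # share within the FI guideline of 45
print(threshold_summary(ri_scores, 1.6))  # share within the RI guideline of 1.6
```

A high share within the guideline, as found in the study, means the index scores carry little information about which drivers were involved in incidents.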

Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident

Procedia PDF Downloads 167
264 Microfabrication of Three-Dimensional SU-8 Structures Using Positive SPR Photoresist as a Sacrificial Layer for Integration of Microfluidic Components on Biosensors

Authors: Su Yin Chiam, Qing Xin Zhang, Jaehoon Chung

Abstract:

Complementary metal-oxide-semiconductor (CMOS) integrated circuits (ICs) have received increased attention in the biosensor community because CMOS technology provides cost-effective, high-performance signal processing at a mass-production level. In order to supply biological samples and reagents effectively to the sensing elements, there is increasing demand for seamless integration of microfluidic components onto fabricated CMOS wafers by post-processing. Although PDMS microfluidic channels replicated from a separately prepared silicon mold can typically be aligned and bonded onto CMOS wafers, this remains challenging owing to the inherently limited alignment accuracy (> ± 10 μm) between the two layers. Here we present a new post-processing method to create three-dimensional microfluidic components using two photoresists of different polarities: an epoxy-based negative SU-8 photoresist and a positive SPR220-7 photoresist. The positive photoresist serves as a sacrificial layer, and the negative photoresist is utilized as a structural material to generate three-dimensional structures. Because both photoresists are patterned using standard photolithography, the dimensions of the structures can be effectively controlled, and the alignment accuracy is dramatically improved (< ± 2 μm), so the technique can be adopted as an alternative post-processing method. To validate the proposed method, we applied this technique to build cell-trapping structures. The SU-8 photoresist was mainly used to generate the structures, and the SPR photoresist was used as a sacrificial layer to generate a sub-channel in the SU-8, allowing fluid to pass through. The sub-channel generated by etching the sacrificial layer serves as a cell-capturing site. The well-controlled dimensions enabled single-cell capture at each site, and the high-accuracy alignment ensured that cells were trapped exactly on the sensing units of the CMOS biosensors.

Keywords: SU-8, microfluidic, MEMS, microfabrication

Procedia PDF Downloads 496
263 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of unmanned aerial vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a serious threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios with a high number of tracks from flying birds make the drone detection task especially challenging: the operator's plan position indicator (PPI) is cluttered with a huge number of potential threats, and reaction time can be severely affected. Compared to UAVs, flying birds show similar velocity, radar cross-section, and, in general, similar characteristics. Given the absence of any single feature able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed, target-movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to evaluate several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
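A genetic-algorithm wrapper for feature selection, as described above, can be sketched in miniature. Everything in this toy version is an illustrative assumption rather than the paper's implementation: the 1-nearest-neighbour leave-one-out fitness, the GA operators, and the synthetic "tracks" in which only feature 0 (mean speed) separates the classes:

```python
import random

random.seed(0)

def fitness(mask, features, labels):
    """Score a feature subset: 1-NN leave-one-out accuracy minus a size penalty."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    correct = 0
    for j, (x, y) in enumerate(zip(features, labels)):
        best, pred = float("inf"), None
        for k, (x2, y2) in enumerate(zip(features, labels)):
            if k == j:
                continue
            d = sum((x[i] - x2[i]) ** 2 for i in idx)
            if d < best:
                best, pred = d, y2
        correct += pred == y
    # Penalise larger subsets slightly so fewer features are preferred
    return correct / len(labels) - 0.01 * len(idx)

def ga_select(features, labels, n_feat, pop=12, gens=15):
    """Tiny GA: elitism, 1-point crossover over a top pool, bit-flip mutation."""
    popn = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=lambda m: fitness(m, features, labels),
                        reverse=True)
        nxt = scored[:2]  # keep the two best masks unchanged
        while len(nxt) < pop:
            a, b = random.sample(scored[:6], 2)
            cut = random.randrange(1, n_feat)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:
                child[random.randrange(n_feat)] ^= 1
            nxt.append(child)
        popn = nxt
    return max(popn, key=lambda m: fitness(m, features, labels))

# Toy tracks: feature 0 (mean speed) separates the classes; features 1-2 are noise
feats = [[1.0 + 0.1 * i, random.random(), random.random()] for i in range(10)] + \
        [[3.0 + 0.1 * i, random.random(), random.random()] for i in range(10)]
labels = ["bird"] * 10 + ["uav"] * 10
mask = ga_select(feats, labels, n_feat=3)
print(mask)
```

The paper's actual pipeline evaluates several classifier families inside the GA loop; the sketch keeps a single cheap classifier to show the mask-evolution structure only.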

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 199
262 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground and is composed of ions and electrons, i.e., plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. For a long time, sounding observations from the top and bottom of the ionosphere were the common way to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers so that the plasma delay can be estimated from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of a single-frequency GPS receiver. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation, such as the vertical TEC distribution. In the measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC, with the observation site located at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting of pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET, and the results obtained with the single-frequency GPS receiver were compared with those from dual-frequency measurements.
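The least-squares fitting of a TEC distribution by polynomial functions of latitude and longitude can be illustrated with the simplest (first-order) case. The data and coefficients below are synthetic, not the paper's measurements, and the real method fits slant-to-vertical-mapped pseudorange residuals rather than clean TEC values:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_tec_plane(lats, lons, tecs):
    """Least-squares fit of TEC = a + b*lat + c*lon via the normal equations."""
    rows = [[1.0, la, lo] for la, lo in zip(lats, lons)]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * t for r, t in zip(rows, tecs)) for i in range(3)]
    return solve(ata, atb)

# Synthetic TEC observations (TECU) generated from a known plane
lats = [20.0, 21.0, 22.0, 21.5, 20.5, 22.5]
lons = [95.0, 96.0, 95.5, 96.5, 96.2, 95.8]
tecs = [10.0 + 0.5 * la + 0.2 * lo for la, lo in zip(lats, lons)]
a, b, c = fit_tec_plane(lats, lons, tecs)
print(round(a, 3), round(b, 3), round(c, 3))  # recovers 10.0, 0.5, 0.2
```

Higher-order polynomial terms add columns to `rows` in the same way; the normal-equations structure is unchanged.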

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 117
261 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The complexity of Arabic grammar poses challenges for learners, particularly in relation to its arrangement, classification, abundance, and bifurcation. The challenge is a result of the contextual factors that gave rise to the grammatical rules in question, as well as the pedagogical approach employed at the time, which was tailored to the needs of learners during that particular historical period; consequently, modern-day students encounter this same obstacle. This requires a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequently encountered, obscure, or disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competencies for achieving proficiency in Arabic, it is imperative to ascertain the conventions that language learners require, in alignment with explicitly delineated benchmarks such as the CEFR criteria. The aim of this study is to reduce the quantity of grammatical rules typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to enhance learners' motivation to continue their Arabic language acquisition and to approach the proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar, and the proliferation and complexity of rules evident in Arabic textbooks designed for non-native speakers is noteworthy. The inadequate organisation and delivery of the material create the impression that the grammar is being imparted with the intention of having the student memorise "Alfiyyat Ibn Malik." Consequently, the sequence of instruction in grammatical rules has been altered, with rules intended for later instruction presented first and those intended for earlier instruction presented later. Students often focus on learning grammatical rules that are not strictly required while neglecting the rules commonly used in everyday speech and writing. Non-Arab students are taught chapters of Arabic grammar that are infrequently encountered in Arabic literature and may be a topic of debate among grammarians. These findings are derived from the researcher's statistical analysis and investigations, which will be disclosed in due course. To teach grammatical rules to non-Arabic speakers, it is imperative to identify the most prevalent grammatical structures in grammar manuals and linguistic literature (the study sample). The present proposal suggests allocating grammatical structures across linguistic levels, taking into account the guidelines of the CEFR, as well as the grammatical structures necessary for non-Arabic-speaking learners to produce modern, cohesive, and comprehensible language.

Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis

Procedia PDF Downloads 71
260 Developing Writing Skills of Learners with Persistent Literacy Difficulties through the Explicit Teaching of Grammar in Context: Action Research in a Welsh Secondary School

Authors: Jean Ware, Susan W. Jones

Abstract:

Background: The benefits of grammar instruction in the teaching of writing are contested in most English-speaking countries. A majority of Anglophone countries abandoned the teaching of grammar in the 1950s based on the conclusion that it had no positive impact on learners’ development of reading, writing, and language. Although the decontextualised teaching of grammar is not helpful in improving writing, a curriculum in which grammar is embedded in a meaningful way can help learners develop their understanding of the mechanisms of language. Although British learners are generally not taught grammar rules explicitly, learners in schools in France, the Netherlands, and Germany are taught explicitly about the structure of their own language. Exposing learners to grammatical analysis can help them develop their understanding of language. Indeed, if learners are taught that each part of speech has an identified role in the sentence, then rather than having to memorise lists of words or spelling patterns, they can focus on determining the task of each word or phrase in the sentence. These processes of categorisation and deduction are higher-order thinking skills. Considering the definitions of dyslexia available in Great Britain, the explicit teaching of grammar in context could help learners with persistent literacy difficulties: learners with dyslexia often develop strengths in problem solving, and the teaching of grammar could therefore help them develop their understanding of language through analytical and logical thinking. Aims: This study aims to gain a further understanding of how the explicit teaching of grammar in context can benefit learners with persistent literacy difficulties. The project is designed to identify ways of adapting existing grammar-focused teaching materials so that learners with specific learning difficulties, such as dyslexia, can use them to further develop their writing skills. It intends to improve educational practice through action, analysis, and reflection. Research Design/Methods: The project therefore uses an action research design and multiple sources of evidence. The data collection tools were standardised test data, teacher assessment data, semi-structured interviews, learners’ before and after attempts at a writing task at the beginning and end of the cycle, documentary data, and a lesson observation carried out by a specialist teacher. Existing teaching materials were adapted for use with five Year 9 learners who had experienced persistent literacy difficulties from primary school onwards. The initial adaptations included reducing the amount of content taught in each lesson and pre-teaching some of the necessary metalanguage. Findings: Learners’ before and after attempts at the writing task were scored by a colleague who did not know the order of the attempts. All five learners’ scores were higher on the second writing task. Learners reported that they had enjoyed the teaching approach. They also made suggestions to be included in the second cycle, as did the colleague who carried out the observations. Conclusions: Although this is a very small exploratory study, the results suggest that adapting grammar-focused teaching materials shows promise for helping learners with persistent literacy difficulties develop their writing skills.

Keywords: explicit teaching of grammar in context, literacy acquisition, persistent literacy difficulties, writing skills

Procedia PDF Downloads 140
259 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control

Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin

Abstract:

We designed and performed preliminary tests on a space-time control circuit, a two-level system built around a 4-5 cm diameter microstrip, toward realistic teleportation. The work begins by calculating the parameters for a circuit that uses alternating current (AC) at a specified frequency as the input signal. The method causes electrons to move along the circuit perimeter starting at the speed of light, which was found satisfactory on the basis of wave-particle duality. It is able to establish superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. Both black holes and white holes are created from time signals at the beginning, where the electrons travel close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. We can therefore apply this method to large-scale circuits, such as potassium, and the same method can be applied to form a system to teleport living things. The black hole is a hibernation environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment, and life can safely teleport to the destination. The same system must therefore exist at both the origin and the destination, which could form a network; the approach could also be applied to space travel. The designed system will be tested on a small scale using a microstrip circuit that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.

Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics

Procedia PDF Downloads 62
258 A Gold-Based Nanoformulation for Delivery of the CRISPR/Cas9 Ribonucleoprotein for Genome Editing

Authors: Soultana Konstantinidou, Tiziana Schmidt, Elena Landi, Alessandro De Carli, Giovanni Maltinti, Darius Witt, Alicja Dziadosz, Agnieszka Lindstaedt, Michele Lai, Mauro Pistello, Valentina Cappello, Luciana Dente, Chiara Gabellini, Piotr Barski, Vittoria Raffa

Abstract:

CRISPR/Cas9 technology has gained the interest of researchers in the field of biotechnology for genome editing. Since its discovery as a microbial adaptive immune defense, this system has been widely adopted and is acknowledged for having a variety of applications. However, critical barriers related to safety and delivery persist. Here, we propose a new concept of genome engineering based on a nano-formulation of Cas9. The Cas9 enzyme was conjugated to a gold nanoparticle (AuNP-Cas9). The AuNP-Cas9 maintained its cleavage efficiency in vitro to the same extent as the ribonucleoprotein containing the non-conjugated Cas9 enzyme, and showed high gene-editing efficiency in vivo in zebrafish embryos. Since CRISPR/Cas9 technology is extensively used in cancer research, melanoma was selected as a validation target. Cell studies were performed in A375 human melanoma cells. The particles per se had no impact on cell metabolism and proliferation. Intriguingly, the AuNP-Cas9 internalized spontaneously into cells and localized as single particles in the cytoplasm and organelles. More importantly, the AuNP-Cas9 showed high nuclear localization. The AuNP-Cas9, overcoming the delivery difficulties of Cas9, could be used in cellular biology and localization studies. Taking advantage of the plasmonic properties of gold nanoparticles, this technology could potentially be a bio-tool for combining gene editing and photothermal therapy in cancer cells. Further work will focus on the intracellular interactions of the nano-formulation and characterization of its optical properties.

Keywords: CRISPR/Cas9, gene editing, gold nanoparticles, nanotechnology

Procedia PDF Downloads 84
257 Perceptions of Teachers toward Inclusive Education Focus on Hearing Impairment

Authors: Chalise Kiran

Abstract:

The prime idea of inclusive education is to mainstream every child in education. However, implementation becomes challenging when there are gaps between policy and practice, and even more so when children have disabilities. The focus is generally placed on policy gaps, but the problem does not always lie with policy; proper practice can be a challenge in countries like Nepal. In shaping practice, teachers’ perceptions toward inclusion play a vital role. Nepal has categorized disability into seven types (physical, visual, hearing, vision/hearing, speech, mental, and multiple); of these, hearing impairment is the focus of this study. Given the limited research on children with disabilities, and the near absence of research on children with hearing impairment (CWHI) and their education in Nepal, this study is a pioneering effort to identify the problems and challenges of CWHI-focused inclusive education in schools, including the gaps and barriers to its proper implementation. Philosophically, the paradigm of the study is post-positivism. Within the post-positivist worldview, a quantitative approach is used, describing the situation and revealing inferential relationships; this is related to the natural model of objective reality. The data were collected through an individual survey of the teachers and head teachers of 35 schools in Nepal. The survey questionnaire was prepared and completed by respondents from schools attended by CWHI in 20 districts across 7 provinces of Nepal. Through these considerations, perceptions of CWHI-focused inclusive education were explored. The data were analyzed using both descriptive and inferential tools: Likert-scale-based analysis for the descriptive part, and the chi-square test to assess the significance of relationships between dependent and independent variables. The descriptive analysis showed that the majority of teachers have positive perceptions toward CWHI-focused inclusive education and toward implementing it, though there are some problems and challenges. The study identified the major challenges and problems by category. Some of them are: a large number of students in a single class; only generic textbooks available for CWHI, and textbooks not available to all students; little opportunity for teachers to acquire knowledge on CWHI; too few teachers in the schools; no flexibility in the curriculum; weak information systems in schools; no educational counselor available; students vulnerable to disasters; no child-abuse control strategy; schools that are not disability-friendly; no free health check-up facility; and no participation of the students in school activities and child clubs. By and large, teachers’ age, gender, years of experience, position, employment status, and own disability show no statistically significant relation to the successful implementation of, or perceptions toward, CWHI-focused inclusive education in schools. However, in some cases the null hypothesis was rejected, while in others it was fully retained. The study suggests implications for policy, for educational authorities, and for teachers and parents.
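The chi-square test used to relate teacher characteristics to perceptions can be sketched as follows; the contingency counts are hypothetical, not the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: teacher gender vs. positive/negative perception
table = [[30, 10],   # male: positive, negative
         [25, 12]]   # female: positive, negative
stat = chi_square(table)
print(stat)  # compare against the 3.841 critical value (df = 1, alpha = .05)
```

A statistic below the critical value retains the null hypothesis of independence, which is the pattern the study reports for most teacher characteristics.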

Keywords: children with hearing impairment, disability, inclusive education, perception

Procedia PDF Downloads 97
256 The Effect of Common Daily Schedule on the Human Circadian Rhythms during the Polar Day on Svalbard: Field Study

Authors: Kamila Weissova, Jitka Skrabalova, Katerina Skalova, Jana Koprivova, Zdenka Bendova

Abstract:

Any Arctic visitor has to deal with extreme conditions, including constant light during the summer season and constant darkness during winter. The light/dark cycle is the most powerful synchronizing signal for the biological clock, and the absence of a daily dark period during the polar day can significantly alter the functional state of the internal clock. However, the inner clock can also be synchronized by other zeitgebers, such as physical activity, food intake, or social interactions. Here, we investigated the effect of the polar day on the circadian clocks of 10 researchers attending a polar base station in the Svalbard region during July. The data obtained on Svalbard were compared with data obtained before the researchers left for the expedition (in the Czech Republic). To determine the state of the circadian clock we used wrist actigraphy complemented by sleep diaries, and saliva and buccal mucosa samples, both collected every 4 hours over a 24 h interval, to measure melatonin by radioimmunoassay and clock gene (PER1, BMAL1, NR1D1, DBP) mRNA levels by RT-qPCR. Clock gene expression was analyzed using cosinor analysis. From our results, it is apparent that the constant sunlight delayed melatonin onset and correspondingly postponed physical activity. Nevertheless, clock gene expression displayed a higher amplitude on Svalbard than in the Czech Republic. These results suggest that the common daily schedule of the Svalbard expedition can strengthen the circadian rhythm in an environment lacking a light/dark cycle. In conclusion, constant sunlight delays melatonin onset, but rhythmic melatonin secretion is maintained, and the effect of constant sunlight on the circadian clock can be minimized by a common, scheduled daily activity.
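The cosinor analysis mentioned above fits a cosine with a fixed 24 h period to rhythmic expression data. A minimal least-squares sketch on synthetic 4-hourly samples (the sampling schedule mirrors the study's; all values are invented):

```python
import numpy as np

# Single-component cosinor model: y(t) = M + A*cos(2*pi*t/24 + phi).
# Rewritten as the linear model y = M + b*cos(wt) + c*sin(wt), it can be
# solved by ordinary least squares; amplitude and acrophase follow from b, c.
def cosinor_fit(t_hours, y, period=24.0):
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    mesor, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b, c)        # rhythm strength
    acrophase = np.arctan2(-c, b)     # phase of the peak, radians
    return mesor, amplitude, acrophase

# Synthetic "clock gene expression" sampled every 4 h over 24 h
t = np.array([0., 4., 8., 12., 16., 20.])
rng = np.random.default_rng(0)
y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0) + rng.normal(0, 0.1, t.size)

mesor, amp, phase = cosinor_fit(t, y)
print(f"MESOR={mesor:.2f}, amplitude={amp:.2f}, acrophase={phase:.2f} rad")
```

Comparing fitted amplitudes between conditions is exactly the kind of contrast the study reports (higher amplitude on Svalbard than in the Czech Republic).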

Keywords: actigraph, clock genes, human, melatonin, polar day

Procedia PDF Downloads 154
255 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System

Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad

Abstract:

The CAN (controller area network) bus is a multi-master, message-broadcast system. Messages sent on the CAN communicate state information, referred to as signals, between different ECUs, providing data consistency at every node of the system. OBD-II dongles, based on a request-response method, are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology: with the well-known ELM327 OBD-II dongle, the maximum feasible scan rate is only 9 queries per second, yielding 8 data points per second. This study aims to design and develop a programmable, latency-sensitive vehicle data acquisition system that improves modularity and flexibility to extract exact, trustworthy, and fresh car sensor data at higher rates. Furthermore, during initial research the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the vehicle's expensive ECUs due to intrinsic vulnerabilities of the CAN bus. The desired sensor data were collected from various vehicles using a Raspberry Pi 3 as the computing and processing unit, with the OBD (request-response) and direct CAN methods running at the same time. Two types of data were collected for this study: first, CAN bus frame data, i.e., each line of hex data sent by an ECU; and second, OBD data, the limited set of values that can be requested from an ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. A standard operational procedure and an experimental vehicle network test bench were developed and can be used for future vehicle network testing experiments.
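As a sketch of the data handling involved, the following decodes two single-frame OBD-II responses carried as raw CAN payloads. The frame layout and PID formulas follow the SAE J1979 standard; the sample bytes themselves are fabricated for illustration.

```python
# Decode single-frame OBD-II (SAE J1979) responses of the kind the
# proposed logger collects over the CAN bus.

def decode_obd_frame(data: bytes):
    """Decode a single-frame OBD-II response (ISO-TP single frame)."""
    length = data[0] & 0x0F        # payload length: mode + PID + data bytes
    mode = data[1]                 # 0x41 = response to mode 0x01 request
    pid = data[2]
    payload = data[3:1 + length]
    if mode != 0x41:
        raise ValueError("not a mode-01 response")
    if pid == 0x0C:                # engine RPM = (256*A + B) / 4
        return "rpm", (256 * payload[0] + payload[1]) / 4
    if pid == 0x0D:                # vehicle speed = A km/h
        return "speed_kmh", payload[0]
    raise ValueError(f"unsupported PID 0x{pid:02X}")

# Example CAN data fields of an ECU reply (typically sent on ID 0x7E8)
print(decode_obd_frame(bytes([0x04, 0x41, 0x0C, 0x1A, 0xF8])))  # ('rpm', 1726.0)
print(decode_obd_frame(bytes([0x03, 0x41, 0x0D, 0x3C])))        # ('speed_kmh', 60)
```

Direct CAN capture, by contrast, records every frame an ECU broadcasts, without the request-response round trip that limits OBD-II scan rates.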

Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi 3

Procedia PDF Downloads 182
254 Mandate of Heaven and Serving the People in Chinese Political Rhetoric: An Evolving Discourse System across Three Thousand Years

Authors: Weixiao Wei, Chris Shei

Abstract:

This paper traces the Mandate of Heaven as a source of justification for ruling regimes in China across approximately three thousand years. Initially, the kings of the Shang dynasty simply proclaimed themselves sons of Heaven sent to Earth to rule the common people. When the last generation of Shang kings became corrupt and ruled with brutal force and cruelty, directly causing their own destruction, the succeeding kings of the Zhou dynasty recognized the importance of virtue and of providing goods to the people. The legitimacy of ruling regimes thus came to rest not on the random allocation of the throne by an unknown supernatural force but on a foundation of morality and the ability to provide goods. This latter component was taken up by the current ruling regime, the Chinese Communist Party, and became the cornerstone of its political legitimacy, also known as ‘performance legitimacy’, where economic development accounts for the satisfaction of the people in place of elections and other democratic means of providing legal-rational legitimacy. Under these circumstances, it becomes important for the ruling party to use political rhetoric to convince people of the government's good performance in the economy, morality, and foreign policy. Thus we see a great deal of propaganda material in both domestic policy statements and international press-conference announcements. The former consists mainly of important speeches made by prominent figures at Party congresses, which are not only made publicly available on government websites but also become obligatory reading for university entrance examinations. The latter consists of announcements about foreign policies, strategies, and actions taken by the government regarding foreign affairs, made at international conferences and offered in Chinese-English bilingual versions on official websites. This documentation strategy creates an impressive image of a Chinese Communist Party that is domestically competent and internationally strong, taking care of the people it governs in terms of economic needs while defending the country against foreign interference and global adversity. The resulting political discourse system, comprising reading materials fully extractable from government websites, is also an excellent repertoire for teaching and research in contemporary Chinese language, discourse and rhetoric, Chinese culture and tradition, Chinese political ideology, and Chinese-English translation. This paper provides a detailed and comprehensive description of the current Chinese political discourse system, arguing for its lineage from the rhetorical convention of the Mandate of Heaven in ancient China and its current concentration on serving the people in place of elections, human rights, and freedom of speech. The paper also provides guidelines on how this discourse system, and the official documents created under it, can serve as research and teaching materials in applied linguistics.

Keywords: Mandate of Heaven, Chinese Communist Party, performance legitimacy, serving the people, political discourse

Procedia PDF Downloads 90
253 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems

Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare

Abstract:

Li-Fi (light fidelity), based on the VLC (visible light communication) technique, has recently been launched and is up to 100 times faster than Wi-Fi, and VLC-OFDM has been proposed as a transmission technique for 5G mobile communication. The VLC system, which operates on visible light, is attractive for efficient spectrum use and easy intensity modulation through LEDs. The LED is the reason for VLC's high speed, as LEDs can flicker incredibly fast (on the order of MHz). A further advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission; for its use of LEDs, the VLC system also falls under the category of ‘green technology’. OFDM is currently used for its high data rates, interference immunity, and high spectral efficiency, but despite these advantages it suffers from large PAPR, inter-carrier interference (ICI), and frequency-offset errors. Since VLC systems transmit with OFDM, they inherit the drawbacks of both OFDM and VLC: the non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, which drives the high-power amplifier into its non-linear region. This paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques have been applied to reduce PAPR: clipping introduces distortion in the carrier; selective mapping wastes bandwidth; partial transmit sequence is very complex due to the exponentially growing number of sub-blocks. The paper discusses three companding techniques, namely µ-law, A-law, and an advanced A-law companding technique. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue; the companding techniques discussed here reduce PAPR, one of the system's non-linearities, and provide better results without increasing the complexity of the system.
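A minimal sketch of one of the discussed techniques, µ-law companding, applied to a QPSK/OFDM time-domain signal; the subcarrier count and companding parameter are illustrative, not the paper's settings:

```python
import numpy as np

# mu-law companding of an OFDM time-domain signal and its effect on PAPR.
# Companding expands small amplitudes (the peak is unchanged), raising the
# mean power and therefore lowering the peak-to-average power ratio.
rng = np.random.default_rng(1)
N = 256                                      # number of subcarriers (illustrative)
symbols = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK
x = np.fft.ifft(symbols) * np.sqrt(N)        # OFDM time-domain signal

def papr_db(sig):
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(sig, mu=8.0):
    peak = np.abs(sig).max()
    mag = peak * np.log1p(mu * np.abs(sig) / peak) / np.log1p(mu)
    return mag * np.exp(1j * np.angle(sig))  # compand magnitude, keep phase

print(f"PAPR before: {papr_db(x):.2f} dB")
print(f"PAPR after : {papr_db(mu_law_compand(x)):.2f} dB")
```

At the receiver, the inverse (expanding) function would be applied before demodulation; the trade-off, as the paper notes, is the distortion introduced by the non-linear amplitude mapping.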

Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM

Procedia PDF Downloads 270
252 Unveiling the Detailed Turn Off-On Mechanism of Carbon Dots to Different Sized MnO₂ Nanosensor for Selective Detection of Glutathione

Authors: Neeraj Neeraj, Soumen Basu, Banibrata Maity

Abstract:

Glutathione (GSH) is one of the most important small-molecular-weight biomolecules, supporting various cellular functions such as gene regulation, xenobiotic metabolism, preservation of intracellular redox activity, and signal transduction. The detection of GSH therefore demands extremely selective and sensitive techniques. Herein, a rapid fluorometric nanosensor is designed by combining carbon dots (Cdots) and MnO₂ nanoparticles of different sizes for the detection of GSH. A bottom-up approach, the microwave method, was used to prepare water-soluble, highly fluorescent Cdots from ascorbic acid as a precursor. MnO₂ nanospheres of different sizes (large, medium, and small) were prepared by varying the concentration ratio of methionine and KMnO₄ at room temperature, as confirmed by HRTEM analysis. Successive addition of MnO₂ nanospheres to the Cdots results in fluorescence quenching. From the fluorescence intensity data, the Stern-Volmer quenching constants (K₍S-V₎) were evaluated. Fluorescence intensity and lifetime analysis showed that the degree of fluorescence quenching of the Cdots followed the order large > medium > small. Moreover, fluorescence recovery studies performed in the presence of GSH showed that the turn-on follows the same order, large > medium > small, as confirmed by quantum yield and lifetime studies. The limits of detection (LOD) of GSH in the presence of Cdots@different-sized MnO₂ nanospheres were also evaluated; the LOD values were in the μM region and lowest in the case of the large MnO₂ nanospheres. The separation distance (d) between the Cdots and the surface of the different MnO₂ nanospheres was determined, and d increases with the size of the MnO₂ nanospheres. In summary, the synthesized Cdots@MnO₂ nanocomposites act as a rapid, simple, economical, and environmentally friendly nanosensor for the detection of GSH.
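The Stern-Volmer evaluation above follows the relation F₀/F = 1 + K_SV·[Q]: a linear fit of the intensity ratio against quencher concentration yields the quenching constant. A minimal sketch on synthetic data (the concentrations and the underlying K_SV are invented, not the study's values):

```python
import numpy as np

# Stern-Volmer analysis: F0/F = 1 + K_SV * [Q]. The slope of the
# least-squares line through (concentration, F0/F) is K_SV.
def stern_volmer_ksv(conc, f0_over_f):
    slope, intercept = np.polyfit(conc, f0_over_f, 1)
    return slope, intercept

conc = np.array([0.0, 2e-6, 4e-6, 6e-6, 8e-6])   # quencher conc., mol/L (illustrative)
ksv_true = 2.5e5                                  # L/mol (illustrative)
ratio = 1 + ksv_true * conc                       # ideal Stern-Volmer behaviour

slope, intercept = stern_volmer_ksv(conc, ratio)
print(f"K_SV = {slope:.3e} L/mol, intercept = {intercept:.2f}")
```

Comparing the fitted slopes for the three nanosphere sizes would reproduce the quenching order (large > medium > small) reported above.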

Keywords: carbon dots, fluorescence, glutathione, MnO₂ nanospheres, turn off-on

Procedia PDF Downloads 136
251 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Our EEG experiment included 27 children with ASD (mean age 6.13 years; mean CARS score 32.41) and 25 healthy children (mean age 6.35 years). Six types of musical stimulation were presented, including compositions by Gluck, Javier-Naida, Kenny G, Chopin, and other classical pieces. The children with autism showed orientation reactions to the music and gave behavioral responses to the different types of music; some of them could rate the stimulation on scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG device, 'Encephalan' (Taganrog, Russia). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MatLab (MathWorks Inc.). For EEG analysis we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2, and the stability of rhythms. To express the dynamics of desynchronization of the different rhythms, we calculated the envelope of the EEG signal over the whole frequency range and through a set of small narrowband filters, using the Hilbert transform. Healthy children showed similar EEG spectral changes across the musical stimulation and could describe the feelings each fragment induced. The exception was the fragment 'Chopin. Prelude' (no. 6), which induced different subjective feelings, behavioral reactions, and EEG spectral changes in children with ASD than in healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in all subjects during musical compositions 1, 3, 5, and 6 compared to the background. During fragments 2 and 4 ('terrible'), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease; during these compositions the peak alpha frequency was lower than the background in healthy children and, conversely, higher in children with ASD.
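The Hilbert-envelope step described above can be sketched as follows: band-pass the signal in a narrow band, then take the magnitude of the analytic signal as the instantaneous amplitude. The synthetic "EEG" and all parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

# Synthetic "EEG": a 10 Hz (alpha) oscillation with a slowly varying amplitude
fs = 250.0                                      # sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
amp = 1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)     # slow amplitude modulation
eeg = amp * np.sin(2 * np.pi * 10 * t)

# Narrowband filter (8-12 Hz), then the analytic-signal magnitude
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha_band = filtfilt(b, a, eeg)
envelope = np.abs(hilbert(alpha_band))          # instantaneous amplitude

# The recovered envelope should track the imposed modulation
# (edges trimmed to avoid filter transients)
err = np.mean(np.abs(envelope[250:-250] - amp[250:-250]))
print(f"mean envelope error: {err:.3f}")
```

Repeating this over a bank of narrowband filters gives the per-rhythm envelope dynamics the study uses to quantify desynchronization.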

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, Childhood Autism Rating Scale (CARS)

Procedia PDF Downloads 264
250 A Moving Target: Causative Factors for Geographic Variation in a Handed Flower

Authors: Celeste De Kock, Bruce Anderson, Corneile Minnaar

Abstract:

Geographic variation in the floral morphology of a flower species has often been assumed to result from co-variation in the availability of regionally-specific functional pollinator types, giving rise to plant ecotypes that are adapted to the morphology of the main pollinator types in that area. Wachendorfia paniculata is a geographically variable enantiostylous (handed) flower with preliminary observations suggesting that differences in pollinator community composition might be driving differences in the degree of herkogamy (spatial separation of the stigma and anthers on the same flower) across its geographic range. This study aimed to determine if pollinator-related variables such as visitation rate and pollinator type could explain differences in floral morphology seen in different populations. To assess pollinator community compositions, pollinator visitation rates, and the degree of herkogamy and flower size, flowers from 13 populations were observed and measured across the Western Cape, South Africa. Multiple regression analyses indicated that pollinator-related variables had no significant effect on the degree of herkogamy between sites. However, the degree of herkogamy was strongly negatively associated with the time of measurement. It remains possible that pollinators have had an effect on the development of herkogamy throughout the evolutionary timeline of different W. paniculata populations, but not necessarily to the fine-scale degree, as was predicted for this study. Annual fluctuations in pollinator community composition, paired with recent disturbances such as urbanization and the overabundance of artificially introduced honeybee hives, might also result in the signal of pollinator adaptation getting lost. Surprisingly, differences in herkogamy between populations could largely be explained by the time of day at which flowers were measured, suggesting a significant narrowing of the distance between reproductive parts throughout the day. 
We propose that this floral movement could be an adaptation that ensures pollination when pollinator visitation earlier in the day has been insufficient, a possibility that will be explored in subsequent studies.
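The multiple-regression step above can be sketched as follows; the data are simulated so that only time of day has a real effect on herkogamy, mirroring the reported result (population count matches the study, but all values and effect sizes are invented):

```python
import numpy as np

# Ordinary least squares: herkogamy regressed on pollinator visitation
# rate and time of measurement, one observation per population.
rng = np.random.default_rng(2)
n = 13                                    # one row per population
visitation = rng.uniform(0, 10, n)        # visits per hour (simulated)
time_of_day = rng.uniform(8, 18, n)       # hour of measurement (simulated)
herkogamy = 5.0 - 0.3 * time_of_day + rng.normal(0, 0.1, n)  # mm (simulated)

X = np.column_stack([np.ones(n), visitation, time_of_day])
beta, *_ = np.linalg.lstsq(X, herkogamy, rcond=None)
intercept, b_visit, b_time = beta
print(f"visitation coef: {b_visit:+.3f}, time-of-day coef: {b_time:+.3f}")
```

The fit recovers a near-zero visitation coefficient and a clearly negative time-of-day coefficient, the simulated analogue of herkogamy narrowing over the day regardless of pollinators.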

Keywords: enantiostyly, floral movement, geographic variation, ecotypes

Procedia PDF Downloads 262
249 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 128
248 Silicon-Photonic-Sensor System for Botulinum Toxin Detection in Water

Authors: Binh T. T. Nguyen, Zhenyu Li, Eric Yap, Yi Zhang, Ai-Qun Liu

Abstract:

Silicon photonic sensors are an emerging class of analytical technologies that use the evanescent field to sensitively measure slight changes in the surrounding environment; the wavelength shift induced by a local refractive-index change serves as the indicator. Such devices can act as sensors for a wide variety of chemical or biomolecular targets in clinical and environmental fields. In our study, a system comprising a silicon-based micro-ring resonator, a microfluidic channel, and optical processing was designed and fabricated for biomolecule detection, and demonstrated for the detection of Clostridium botulinum type A neurotoxin (BoNT) in different water sources. BoNT is one of the most toxic substances known and is relatively easily obtained from a cultured bacterial source. The toxin is extremely lethal, with an LD50 of about 0.1 µg/70 kg intravenously, 1 µg/70 kg by inhalation, and 70 µg/kg orally; these factors make botulinum neurotoxins primary candidates as bioterrorism or biothreat agents. A sensing system is therefore required that can detect BoNT quickly, sensitively, and automatically. For BoNT detection, the silicon-based micro-ring resonator is modified with a linker for immobilization of the anti-botulinum capture antibody, and an enzymatic reaction is employed to amplify the signal and hence gain sensitivity. As a result, a detection limit of 30 pg/mL is achieved by our silicon photonic sensor within a short period of 80 min. The sensor also shows high specificity against other botulinum types. In the future, by designing a multifunctional waveguide array with a fully automatic control system, it will be simple to detect multiple biomaterials simultaneously at low concentrations within a short period. The system has great potential for online, real-time, highly sensitive, label-free, rapid biomolecular detection.

Keywords: biotoxin, photonic, ring resonator, sensor

Procedia PDF Downloads 99
247 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy

Authors: M. Regina Carreira-Lopez

Abstract:

Lightweight ontologies seek to make concrete the relationships between a parent node and a secondary node, also called a "child node". This logical relation (L) can be formally defined as a triple ontological relation LO = ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each representing a relationship between nodes so as to form a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implemented in SQL code and allows data export to XML schemas, supporting a semantic, faceted analysis of discourse through keywords in context (KWIC). The methodology applies random and unrestricted sampling with RMySQL to verify the resonance phenomenon of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research shows that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are likewise inverse. The results from grouping facets, or polysemic words with synsets, in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and appears to achieve a more selective search of e-documents (classification ontology). It can also foster the production of on-line catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used to implement data models, after prior adaptation of the base ontology to structured meta-languages such as OWL or RDF (descriptive ontology). Mondoc thus serves the classification of concepts and applies a semantic, faceted indexing approach, enabling information retrieval as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, L, BKF⟩ where LN is a set of entities that connect with other nodes to form a rooted tree ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts encoded in a formal language L. Mondoc resolves only partial problems of linguistic ambiguity (synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming development should target oriented meta-languages with structured documents in XML.
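A minimal keyword-in-context (KWIC) extractor of the kind described above: every occurrence of a term is listed with a fixed window of surrounding words. The corpus sentence is invented for illustration.

```python
import re

# Keyword-in-context (KWIC): list each occurrence of a term together
# with a window of neighbouring tokens, the basic unit of the faceted
# discourse analysis described above.
def kwic(text, term, window=3):
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == term:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

doc = ("Atlantic migrations increased after independence; "
       "records of Atlantic crossings appear in both archives.")
for left, kw, right in kwic(doc, "atlantic"):
    print(f"{left:>30} [{kw}] {right}")
```

Counting such hits per document is also the raw input for the inverse co-occurrence frequencies the model computes across the corpus.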

Keywords: hypernymy, information retrieval, lightweight ontology, resonance

Procedia PDF Downloads 111
246 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are extremely weakly damped, owing to cost optimization in machine building and to the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related power-system stability problems are harmful and can lead to operational failures and damage. The only practical way to increase the damping of these unwanted oscillations is the implementation of power system stabilizers, which generate an additional control signal that changes the synchronous generator's field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Commercially available power system stabilizers are based on linear control theory; due to the nonlinear dynamics of the synchronous generator, such stabilizers do not assure optimal damping of the generator's oscillations across the entire operating range. For that reason, robust power system stabilizers suited to the entire operating range are preferable, and numerous robust techniques are applicable. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations, and rejection of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping the synchronous generator's oscillations across the entire operating range. The results show improved damping across the entire operating range of the synchronous generator and an increase in power system stability. The study contributes to the development of advanced stabilizers that can replace conventional linear stabilizers and improve the damping of synchronous generators.
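A toy sketch of the sliding mode idea on a weakly damped second-order oscillator, a simplified stand-in for generator swing dynamics (the plant, the sliding surface, and all gains are illustrative, not the paper's design):

```python
import numpy as np

# Sliding mode damping: define a sliding surface s = c*x + dx and apply
# the switching law u = -k*sign(s). Once the state reaches s = 0, it
# slides to the origin along dx = -c*x, regardless of plant parameters
# (as long as k dominates the plant dynamics).
def simulate(k=10.0, c=2.0, dt=1e-3, steps=20000):
    x, dx = 1.0, 0.0                     # initial deviation and its rate
    for _ in range(steps):
        s = c * x + dx                   # sliding surface
        u = -k * np.sign(s)              # switching control law
        ddx = -4.0 * x - 0.05 * dx + u   # weakly damped oscillator + control
        dx += ddx * dt
        x += dx * dt
    return x, dx

x_end, dx_end = simulate()
print(f"final state: x={x_end:.4f}, dx={dx_end:.4f}")
```

Without the control term the plant (damping coefficient 0.05) would still be oscillating after 20 s; the switching law drives it to the surface and then exponentially to the origin, at the cost of the chattering inherent in sign-based control.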

Keywords: control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator

Procedia PDF Downloads 209
245 DNA Methylation Changes in Response to Ocean Acidification at the Time of Larval Metamorphosis in the Edible Oyster, Crassostrea hongkongensis

Authors: Yong-Kian Lim, Khan Cheung, Xin Dang, Steven Roberts, Xiaotong Wang, Vengatesen Thiyagarajan

Abstract:

The unprecedented rate of increase in ocean CO₂ levels and the subsequent changes in the carbonate system, including decreased pH, known as ocean acidification (OA), are predicted to disrupt not only calcification but also several other physiological and developmental processes in a variety of marine organisms, including edible oysters. Nonetheless, not all species are vulnerable to these OA threats; some may be able to cope with OA stress through environmentally induced modifications of gene and protein expression. For example, external environmental stressors, including OA, can influence the addition and removal of methyl groups through epigenetic modification (e.g., DNA methylation), turning gene expression "on or off" as part of a rapid adaptive mechanism for coping with OA. In this study, this hypothesis was tested by examining the effect of OA, using decreased pH 7.4 as a proxy, on the DNA methylation pattern of an endemic and commercially important estuarine oyster species, Crassostrea hongkongensis, at the time of larval habitat selection and metamorphosis. Larval growth rate did not differ between the control (pH 8.1) and the treatment (pH 7.4). The metamorphosis rate of pediveliger larvae was higher at pH 7.4 than at pH 8.1; however, over one-third of the larvae raised at pH 7.4 failed to attach to an optimal substrate, as defined by biofilm presence. During larval development, a total of 130 genes were differentially methylated across the two treatments. This differential methylation may partially account for the higher metamorphosis success rate at pH 7.4 coupled with poor substratum-selection ability. Differentially methylated loci were concentrated in exon regions and appear to be associated with cytoskeletal and signal transduction, oxidative stress, metabolic processes, and larval metamorphosis, implying a high potential of C. hongkongensis larvae to acclimate and adapt to OA threats through non-genetic means within a single generation.

Keywords: adaptive plasticity, DNA methylation, larval metamorphosis, ocean acidification

Procedia PDF Downloads 121
244 Camptothecin Promotes ROS-Mediated G2/M Phase Cell Cycle Arrest, Resulting from Autophagy-Mediated Cytoprotection

Authors: Rajapaksha Gedara Prasad Tharanga Jayasooriya, Matharage Gayani Dilshara, Yung Hyun Choi, Gi-Young Kim

Abstract:

Camptothecin (CPT) is a quinoline alkaloid that inhibits DNA topoisomerase I and induces cytotoxicity in a variety of cancer cell lines. We previously showed that CPT effectively inhibits the invasion of prostate cancer cells and that combined treatment with subtoxic doses of CPT and TNF-related apoptosis-inducing ligand (TRAIL) potently enhances apoptosis in a caspase-dependent manner in hepatoma cells. Here, we found that treatment with CPT caused an irreversible cell cycle arrest in the G2/M phase. The CPT-induced cell cycle arrest was associated with a decrease in the protein level of cell division cycle 25C (Cdc25C) and increased levels of cyclin B and p21. The CPT-induced decrease in Cdc25C was blocked in the presence of the proteasome inhibitor MG132, which reversed the cell cycle arrest. In addition, the CPT-induced increase in Cdc25C phosphorylation resulted from activation of checkpoint kinase 2 (Chk2), which was associated with phosphorylation of ataxia telangiectasia-mutated. Interestingly, the CPT-induced G2/M arrest is reactive oxygen species (ROS) dependent: the ROS inhibitors NAC and GSH reversed the arrest. These results were further confirmed by transient knockdown of nuclear factor-erythroid 2-related factor 2 (Nrf2), which regulates ROS production; siNrf2 treatment increased the ROS level and further enhanced the CPT-induced G2/M arrest. Our data also indicate that CPT enhances cell cycle arrest through the extracellular signal-regulated kinase (ERK) and c-Jun N-terminal kinase (JNK) pathways: inhibitors of ERK and JNK further decreased Cdc25C expression and the protein expression of p21 and cyclin B. These findings indicate that Chk2-mediated phosphorylation of Cdc25C plays a major role in the G2/M arrest induced by CPT.

Keywords: camptothecin, cell cycle, checkpoint kinase 2, nuclear factor-erythroid 2-related factor 2, reactive oxygen species

Procedia PDF Downloads 418
243 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting the data collected from the brain. Electroencephalography (EEG) is a non-invasive method to measure the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential components of non-invasive measuring methods. Despite the fact that it has a low spatial resolution -meaning it is able to detect when a group of neurons fires at the same time-, it is a non-invasive method, making it easy to use without possessing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. The recordings of EEGs include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, there are some sources of noise that may affect the reliability of the EEG signals as it is a non-invasive method. For instance, the noise from the EEG equipment, the leads, and the signals coming from the subject -such as the activity of the heart or muscle movements- affect the signals detected by the electrodes of the EEG. However, new techniques have been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device is not enough to analyze the data from the brain to be used by the BCI implication. Because the EEG signal is very complex, to analyze it, artificial intelligence algorithms are required. These algorithms convert complex data into meaningful and useful information for neuroscientists to use the data to design BCI devices. 
Although neurological conditions that require highly precise data call for invasive BCIs, non-invasive BCIs such as EEG-based systems are used in many cases to assist disabled people or to help anyone with basic everyday tasks. For example, EEG can detect the onset of a seizure in epilepsy patients before it occurs, allowing a BCI device to help prevent the seizure. Overall, EEG is a widely used non-invasive BCI technique that has driven the development of BCIs and will continue to supply the data that eases people's lives as new BCI techniques are developed in the future.
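To illustrate the kind of signal analysis described above, the sketch below estimates the power of an EEG trace in the alpha band (8-12 Hz) with a naive discrete Fourier transform, the sort of feature a BCI classifier might consume. This is a minimal, illustrative example, not a method from the abstract: the synthetic trace, the 256 Hz sampling rate, and the band limits are all assumptions.

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Power of `samples` in the [f_lo, f_hi] Hz band via a naive DFT."""
    n = len(samples)
    total = 0.0
    for k in range(1, n // 2):  # skip DC, stay below the Nyquist frequency
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            total += (re * re + im * im) / (n * n)
    return total

# Synthetic one-second "EEG" trace: a 10 Hz (alpha) rhythm plus a weaker 50 Hz mains artifact.
fs = 256
trace = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 50 * t / fs)
         for t in range(fs)]

alpha = band_power(trace, fs, 8, 12)    # alpha band
mains = band_power(trace, fs, 48, 52)   # mains-noise band
print(alpha > mains)  # the alpha rhythm dominates the artifact
```

In practice a windowed FFT (e.g. Welch's method) would replace this naive loop, but the band-power feature itself is a standard building block for EEG-based BCIs.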

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 52
242 Pattern of Anisometropia, Management and Outcome of Anisometropic Amblyopia

Authors: Husain Rajib, T. H. Sheikh, D. G. Jewel

Abstract:

Background: Amblyopia is a frequent cause of monocular blindness in children. It is a unilateral or bilateral reduction of best-corrected visual acuity associated with impaired visual processing, accommodation, motility, spatial perception, or spatial projection. Anisometropia is an important risk factor for amblyopia: it develops when unequal refractive error blurs the image in one eye during the critical developmental period, and central inhibition of the visual signal originating from the affected eye is associated with significant visual problems, including aniseikonia, strabismus, and reduced stereopsis. Methods: This is a prospective hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of Chittagong Eye Infirmary & Training Complex. Fifty subjects with anisometropic amblyopia were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 3 and 13 years of age, without previous amblyopia treatment, and whose parents were willing to participate in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best refractive correction for a month. When visual acuity in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was carried out daily for 6-8 hours (full time) together with vision therapy, for a total of 3 months. Results: In this study, 8% of subjects had anisometropia from myopia, 18% from hyperopia, and 74% from astigmatism. The initial mean visual acuity was 0.74 ± 0.39 LogMAR; after amblyopia therapy with active vision therapy, the mean visual acuity was 0.34 ± 0.26 LogMAR. About 94% of subjects improved by at least two lines. The depth of amblyopia was associated with the type of anisometropic refractive error and the magnitude of anisometropia (p<0.005). The study found 10% mild, 64% moderate, and 26% severe amblyopia.
Binocular function also decreased with the magnitude of anisometropia. Conclusion: Anisometropic amblyopia is an important concern in the pediatric age group because it can lead to visual impairment. Occlusion therapy with at least one instructed hour of active visual activity practiced out of school hours was effective in anisometropic amblyopes diagnosed at the age of 8 years and older, and the patients complied well with the treatment.
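The reported gain, from 0.74 to 0.34 LogMAR, can be converted into chart lines, since on a standard LogMAR chart each line spans 0.1 LogMAR. A tiny sketch of that arithmetic (the 0.1-per-line convention is an assumption of standard charts; the values are the abstract's group means):

```python
def lines_gained(logmar_before, logmar_after, line_step=0.1):
    """Lines of visual-acuity improvement; one chart line spans 0.1 LogMAR."""
    return round((logmar_before - logmar_after) / line_step)

# Group means reported in the abstract: 0.74 LogMAR before therapy, 0.34 after.
print(lines_gained(0.74, 0.34))  # 4 lines of mean improvement
```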

Keywords: refractive error, anisometropia, amblyopia, strabismic amblyopia

Procedia PDF Downloads 260
241 Neural Correlates of Attention Bias to Threat during the Emotional Stroop Task in Schizophrenia

Authors: Camellia Al-Ibrahim, Jenny Yiend, Sukhwinder S. Shergill

Abstract:

Background: Attention bias to threat plays a role in the development, maintenance, and exacerbation of delusional beliefs in schizophrenia, in which patients emphasize the threatening characteristics of stimuli and prioritise them for processing. Cognitive control deficits arise when task-irrelevant emotional information elicits attentional bias and obstructs optimal performance. This study investigated the neural correlates of the interference effect of linguistic threat and whether these effects are independent of delusional severity. Methods: Using event-related functional magnetic resonance imaging (fMRI), the neural correlates of the interference effect of linguistic threat during the emotional Stroop task were investigated, comparing patients with schizophrenia with high (N=17) and low (N=16) paranoid symptoms and healthy controls (N=20). Participants were instructed to identify the font colour of each word presented on the screen as quickly and accurately as possible. Stimulus types varied among threat-relevant, positive, and neutral words. Results: Group differences in whole-brain effects indicated decreased amygdala activity in patients with high paranoid symptoms compared with low-paranoid patients and healthy controls. Region-of-interest (ROI) analysis validated our results within the amygdala and examined changes within the striatum, showing a pattern of reduced activation in the clinical group compared to healthy controls. Delusional severity was associated with significantly decreased neural activity in the striatum within the clinical group. Conclusion: Our findings suggest that the emotional interference mediated by the amygdala and striatum may reduce responsiveness to threat-related stimuli in schizophrenia and that attenuation of the fMRI blood-oxygen-level-dependent (BOLD) signal within these areas might be influenced by the severity of delusional symptoms.
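Behaviourally, the interference effect in an emotional Stroop task is conventionally scored as the difference between mean colour-naming reaction times for threat words and for neutral words. The sketch below illustrates that score with hypothetical per-trial values; these are not the study's data, and the fMRI analysis in the abstract goes well beyond this behavioural measure.

```python
def stroop_interference(rt_threat_ms, rt_neutral_ms):
    """Mean threat-word reaction time minus mean neutral-word reaction time (ms).
    A positive score indicates attentional interference from threat words."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_threat_ms) - mean(rt_neutral_ms)

# Hypothetical per-trial colour-naming reaction times for one participant (ms).
threat_rts = [712, 698, 734, 721]
neutral_rts = [655, 663, 649, 671]
score = stroop_interference(threat_rts, neutral_rts)
print(score > 0)  # slower colour naming for threat words -> interference
```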

Keywords: attention bias, fMRI, schizophrenia, Stroop

Procedia PDF Downloads 178
240 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

An important tool for the enhancement of cultural sites is the serious game (SG), i.e., a game designed for educational purposes; SGs are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more immediate and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, when integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS-based AR and SGs for cultural heritage revitalization. The present research therefore focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; its most important architectures are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information derives from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three characters who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de' Medici, Grand Duke of Tuscany, to build the village; his son Ippolito; and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes, so as to have gestural animations and reproduce lip movement during speech.
The Unity Rhubarb Lip Syncer plugin was used for the lip-sync animation, and the historical costumes were created in Marvelous Designer. The application was developed using the Unity 3D graphics and game engine; the AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates, and the ARFoundation library was used to display AR content. The SG is available in two versions, one for children and one for adults. The children's version consists of finding a digital treasure of valuable items and historical rarities: players must find 9 village locations where 3D AR models of the historical figures explain the history of the village and provide clues. To motivate players, there are 3 levels of rewards, one for every 3 clues discovered; the rewards consist of AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children; test subjects will be surveyed on a Likert scale to assess their perceptions of using the app and to compare the learning experience of a guided tour with that of interacting with the app.
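Geolocation AR of the kind described typically works by comparing the player's GPS fix with each landmark's coordinates and triggering the AR content once the player is within a small radius. A minimal, library-agnostic sketch of that check using the haversine great-circle formula; the coordinates and the 20 m radius are purely illustrative assumptions, not the app's actual landmark data or the AR+GPS Location plugin's API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius (m)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reachable(player, landmark, radius_m=20.0):
    """True when the player is close enough for the AR character to appear."""
    return haversine_m(*player, *landmark) <= radius_m

# Hypothetical coordinates near Caldana, Tuscany (illustrative only).
church = (42.930, 10.930)
print(reachable((42.9301, 10.9301), church))  # a few metres away -> triggers
print(reachable((42.940, 10.930), church))    # roughly a kilometre away -> does not
```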

Keywords: augmented reality, cultural heritage, GPS, serious game

Procedia PDF Downloads 81
239 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization

Authors: Taha Benarbia

Abstract:

The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize the overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency.
The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.
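As a toy illustration of the learned mappings the abstract describes, the sketch below trains a one-hidden-layer neural network, by stochastic gradient descent with manual backpropagation, to map a normalized traffic volume to a travel time. The data are synthetic and the network is far smaller than anything the paper would use; everything here (data shape, architecture, learning rate) is an assumption for illustration only.

```python
import math, random

random.seed(0)

# Toy data: normalized traffic volume -> normalized travel time (synthetic, nonlinear).
xs = [i / 20.0 for i in range(21)]
ys = [0.2 + 0.6 * x * x for x in xs]  # travel time grows superlinearly with volume

# One-hidden-layer network with tanh activation.
H, lr = 8, 0.05
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h for j, h in enumerate(hidden)) + b2, hidden

def mse():
    return sum((predict(x)[0] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(2000):
    for x, y in zip(xs, ys):
        out, hidden = predict(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - hidden[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
print(loss_after < loss_before)  # training reduces the forecasting error
```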

Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network

Procedia PDF Downloads 53