Search results for: target
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2646

2196 The Use of Educational Language Games

Authors: April Love Palad, Charita B. Lasala

Abstract:

Mastery of the English language is one of the most important goals of all English language teachers. This goal can be assessed through students' actual performance using the target language, English. Learning English requires hard work and sustained effort, and proficiency is attained gradually over a long period of time. It is therefore extremely important for all English language teachers to know the effects of incorporating games into their teaching. Whether this strategy has positive or negative effects on student learning, teachers should always consider what is best for their learners. Games may help build learners' confidence, and they help teachers create contexts in which the language is suitable and significant. Accuracy and fluency are at the heart of this study, which compares teaching English by the traditional method with teaching English using language games. It is very important for English teachers to know which strategy is more effective in order to cope with students' underachievement in this subject. This study used the comparative-experimental method with a pre-test/post-test design, with the aim of exploring the effectiveness of language games as a strategy for language teaching to high school students. Two groups of students were observed, a control group and an experimental group, taught with the two strategies: the traditional method and language games. The scores obtained by the two samples were compared to determine the effectiveness of the two strategies. The study found that language games help improve students' fluency and accuracy in the use of the target language, as is evident in the pre-test and post-test results as well as the mean gain scores of the two groups of students.
In addition, this study gives a clear view of the positive effects of using language games in teaching, a finding supported by the related studies reviewed in this research. The findings served as the basis for a proposed learning plan integrating language games that teachers may use in their own teaching. The study further concluded that language games are effective in developing students' fluency in using the English language, showing that games encourage students to learn while being entertained, and also promote the development of language competency. This study will be very useful to teachers who are hesitant to use this strategy in their teaching.

Keywords: language games, experimental, comparative, strategy, language teaching, methodology

Procedia PDF Downloads 394
2195 A Religious Book Translation by Pragmatic Approach: The Vajrachedika-Prajna-Paramita Sutra

Authors: Yoon-Cheol Park

Abstract:

This research examines the Chinese character-Korean language translation of the Vajrachedika-prajna-paramita sutra from a pragmatic approach. The background of this research is that no previous research has examined the translation of the Vajrachedika-prajna-paramita from a pragmatic approach until now. Even though it is composed of conversational structures between Buddha and his disciple, unlike other Buddhist sutras, most of its translations show traces of literal translation and have overlooked the pragmatic elements in it. Accordingly, it is meaningful to examine the messages through the speaker-hearer relation and the relation between speaker intention and utterance meaning. Practically, the Vajrachedika-prajna-paramita sutra includes pragmatic elements such as speech acts, presupposition, conversational implicature, the cooperative principle and politeness. First, the speech acts in the sutra text require the translation to reveal the obvious performative meanings of language in the target text. Presupposition in the dialogues is conveyed by paraphrasing or substituting abstruse language with easy expressions. Conversational implicature in utterances makes it possible to understand the meanings of holy words by relying on utterance contexts. In particular, relevance results in an increase of readability in the translation owing to previous utterance contexts. Finally, politeness in the target text is conveyed with natural stylistics through the honorific system of the Korean language. These elements mean that the pragmatic approach can function as a useful device for conveying holy words in a specific, practical and direct way depending on utterance contexts. Therefore, we expect that taking a pragmatic approach to translating the Vajrachedika-prajna-paramita sutra will provide a theoretical foundation for seeking better translation methods than the literal translations of the past.
It also implies that translations of Buddhist sutras need to convey their messages through translation methods that take into account the characteristics of sutra texts like the Vajrachedika-prajna-paramita.

Keywords: buddhist sutra, Chinese character-Korean language translation, pragmatic approach, utterance context

Procedia PDF Downloads 382
2194 Investigating the Online Effect of Language on Gesture in Advanced Bilinguals of Two Structurally Different Languages in Comparison to L1 Native Speakers of the L2

Authors: Armita Ghobadi, Samantha Emerson, Seyda Ozcaliskan

Abstract:

Being bilingual involves mastery of both speech and gesture patterns in a second language (L2). We know from earlier work in first language (L1) production contexts that speech and co-speech gesture form a tightly integrated system: co-speech gesture mirrors the patterns observed in speech, suggesting an online effect of language on the nonverbal representation of events in gesture during the act of speaking (i.e., "thinking for speaking"). Relatively less is known about the online effect of language on gesture in bilinguals speaking structurally different languages. The few existing studies, mostly with small sample sizes, suggest inconclusive findings: some show greater achievement of L2 patterns in gesture with more advanced L2 speech production, while others show preferences for L1 gesture patterns even in advanced bilinguals. In this study, we focus on advanced bilingual speakers of two structurally different languages (Spanish L1 with English L2) in comparison to L1 English speakers. We ask whether bilingual speakers will follow target L2 patterns not only in speech but also in gesture, or alternatively, follow L2 patterns in speech but resort to L1 patterns in gesture. We examined this question by studying speech and gestures produced by 23 advanced adult Spanish (L1)-English (L2) bilinguals (Mage=22; SD=7) and 23 monolingual English speakers (Mage=20; SD=2). Participants were shown 16 animated motion event scenes that included distinct manner and path components (e.g., "run over the bridge"). We recorded and transcribed all participant responses for speech and segmented them into sentence units that included at least one motion verb and its associated arguments. We also coded all gestures that accompanied each sentence unit. We focused on motion event descriptions as they show strong crosslinguistic differences in the packaging of motion elements in speech and co-speech gesture in first language production contexts.
English speakers synthesize manner and path into a single clause or gesture (he runs over the bridge; running fingers forward), while Spanish speakers express each component separately (manner-only: el corre=he is running; circling arms next to the body conveying running; path-only: el cruza el puente=he crosses the bridge; tracing a finger forward conveying trajectory). We tallied all responses by group and packaging type, separately for speech and co-speech gesture. Our preliminary results (n=4/group) showed that productions in English L1 and Spanish L1 differed, with a greater preference for conflated packaging in L1 English and separated packaging in L1 Spanish, a pattern that was also largely evident in co-speech gesture. Bilinguals' production in L2 English, however, followed the patterns of the target language in speech, with a greater preference for conflated packaging, but not in gesture. Bilinguals used separated and conflated strategies in gesture at roughly similar rates in their L2 English, showing an effect of both L1 and L2 on co-speech gesture. Our results suggest that online production of an L2 has more limited effects on L2 gestures and that mastery of native-like patterns in L2 gesture might take longer than mastery of native-like L2 speech patterns.

Keywords: bilingualism, cross-linguistic variation, gesture, second language acquisition, thinking for speaking hypothesis

Procedia PDF Downloads 49
2193 Association of the Time in Targeted Blood Glucose Range of 3.9–10 Mmol/L with the Mortality of Critically Ill Patients with or without Diabetes

Authors: Guo Yu, Haoming Ma, Peiru Zhou

Abstract:

BACKGROUND: In addition to hyperglycemia, hypoglycemia, and glycemic variability, a decrease in the time in the targeted blood glucose range (TIR) may be associated with an increased risk of death for critically ill patients. However, the relationship between the TIR and mortality may be influenced by the presence of diabetes and glycemic variability. METHODS: A total of 998 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. The TIR is defined as the percentage of time spent in the target blood glucose range of 3.9–10.0 mmol/L within 24 hours. The relationship between TIR and in-hospital death in diabetic and non-diabetic patients was analyzed. The effect of glycemic variability was also analyzed. RESULTS: The binary logistic regression model showed that there was a significant association between the TIR as a continuous variable and the in-hospital death of severely ill non-diabetic patients (OR=0.991, P=0.015). As a classification variable, TIR≥70% was significantly associated with in-hospital death (OR=0.581, P=0.003). Specifically, TIR≥70% was a protective factor against the in-hospital death of severely ill non-diabetic patients. The TIR of severely ill diabetic patients was not significantly associated with in-hospital death; however, glycemic variability was significantly and independently associated with in-hospital death (OR=1.042, P=0.027). Binary logistic regression analysis of the comprehensive indices showed that for non-diabetic patients, the C3 index (low TIR & high CV) was a risk factor for increased mortality (OR=1.642, P<0.001). In addition, for diabetic patients, the C3 index was an independent risk factor for death (OR=1.994, P=0.008), and the C4 index (low TIR & low CV) was independently associated with increased survival.
CONCLUSIONS: The TIR of non-diabetic patients during ICU hospitalization was associated with in-hospital death even after adjusting for disease severity and glycemic variability. There was no significant association between the TIR and mortality of diabetic patients. However, for both diabetic and non-diabetic critically ill patients, the combined effect of high TIR and low CV was significantly associated with ICU mortality. Diabetic patients seem to have higher blood glucose fluctuations and can tolerate a large TIR range. Both diabetic and non-diabetic critically ill patients should maintain blood glucose levels within the target range to reduce mortality.
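The TIR and glycemic-variability (CV) measures analyzed above can be computed directly from serial glucose readings. The sketch below is an illustrative reconstruction, assuming evenly spaced measurements; the function names and sample values are not taken from the study:

```python
# Hedged sketch: time in targeted range (TIR) and coefficient of variation (CV)
# from a series of glucose readings in mmol/L, assuming equal time spacing.

def time_in_range(readings, low=3.9, high=10.0):
    """TIR as a percentage: share of readings inside [low, high]."""
    if not readings:
        raise ValueError("no glucose readings")
    in_range = sum(1 for g in readings if low <= g <= high)
    return 100.0 * in_range / len(readings)

def coefficient_of_variation(readings):
    """CV as a percentage: sample standard deviation over the mean."""
    mean = sum(readings) / len(readings)
    var = sum((g - mean) ** 2 for g in readings) / (len(readings) - 1)
    return 100.0 * var ** 0.5 / mean
```

For example, with hypothetical readings `[3.0, 5.0, 7.0, 11.0]`, two of four values fall in the 3.9–10.0 mmol/L band, giving a TIR of 50%.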

Keywords: severe disease, diabetes, blood glucose control, time in targeted blood glucose range, glycemic variability, mortality

Procedia PDF Downloads 197
2192 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since the WER values for a specific material are nearly constant at different proton energies, WER is the more useful parameter to compare. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS) and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical, experimental and simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was simulated. A typical mono-energetic proton pencil beam, over the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In TRIM calculations, the type of projectile, its energy and angle of incidence, and the type and thickness of the target material must be defined. The mode of 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest difference in WER values between the codes was 3.19%, 1.9% and 0.67% for Al, PMMA and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77% and 0.95% for SEICS, FLUKA and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials.
They can also predict Bragg peak location and range of proton beams with acceptable accuracy.
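The comparison above can be illustrated with a minimal sketch. Here WER is taken as the ratio of the proton range in water to the range in the material at the same incident energy, and the inter-code comparison as a relative percentage difference; the function names and the numbers used below are assumptions for illustration, not data from the study:

```python
# Hedged sketch of the water-equivalent-ratio comparison: WER as the ratio of
# proton range in water to range in the material, and the percentage
# difference used when comparing values between codes or against experiment.

def water_equivalent_ratio(range_in_water_mm, range_in_material_mm):
    """WER for a material at a given incident proton energy."""
    return range_in_water_mm / range_in_material_mm

def percent_difference(value, reference):
    """Relative difference (%) of one WER value against a reference value."""
    return 100.0 * abs(value - reference) / reference
```

For instance, a hypothetical WER of 1.03 from one code against a reference of 1.00 gives a 3% difference, the scale on which the codes above agree or disagree.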

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 293
2191 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020, and each country will be assessed on its performance in reducing GHG. After that, they shall propose a reduction target higher than the previous target every five years. Therefore, an accurate method for calculating greenhouse gas emissions is essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6 and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The Global Warming Potential (GWP) values of non-CO2 gases are much higher than that of CO2, which means they have a greater effect on global warming than CO2. Therefore, GHG calculation methods for the electronics industry are provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are being discussed at the ISO/TC 146 meeting. As discussed earlier, being precise and accurate in calculating non-CO2 GHG is becoming more important. Thus, this study aims to discuss the implications of the calculation methods by comparing the methods of the IPCC and the EPA. In conclusion, after analyzing the two methods, the EPA method is more detailed and also provides a calculation for N2O. In the case of the default emission factors, the IPCC provides more conservative results than the EPA; the IPCC factor was developed for calculating national GHG emissions, while the EPA factor was developed specifically for the U.S., which means it must have been developed to address the environmental issues of the US.
The semiconductor factory 'A' measured F-gases according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and it was observed that its emission factor shows a higher DRE compared to the default DRE factors of the IPCC and the EPA. Therefore, each country can improve its GHG emission calculation by developing its own emission factors (if possible) at the time of reporting Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
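The role of emission factors, abatement efficiency (DRE), and GWP in such calculations can be sketched generically. The following is a simplified illustration of the general form both guideline families build on, not the exact IPCC or EPA equation; all parameter names and the example values are assumptions:

```python
# Hedged, simplified sketch: CO2-equivalent emissions of one fluorinated
# process gas, from the gas consumed, the fraction escaping the process, the
# abatement system's destruction/removal efficiency (DRE), and the gas's GWP.

def co2_equivalent_emission(gas_used_kg, emission_factor, dre, abated_fraction, gwp):
    """CO2-equivalent emissions in kg for one process gas.

    emission_factor: fraction of the consumed gas that leaves the process unreacted
    dre:             destruction/removal efficiency of the abatement system
    abated_fraction: share of the exhaust routed through abatement
    gwp:             global warming potential of the gas relative to CO2
    """
    emitted_kg = gas_used_kg * emission_factor * (1 - dre * abated_fraction)
    return emitted_kg * gwp
```

Under this sketch, a higher facility-specific DRE (as factory 'A' measured) directly lowers the reported CO2-equivalent emissions relative to using a conservative default factor.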

Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 267
2190 Zingiberaceous Plants as a Source of Anti-Bacterial Activity: Targeting Bacterial Cell Division Protein (FtsZ)

Authors: S. Reshma Reghu, Shiburaj Sugathan, T. G. Nandu, K. B. Ramesh Kumar, Mathew Dan

Abstract:

Bacterial diseases are considered one of the most prevalent health hazards in the developing world, and many bacteria are becoming resistant to existing antibiotics, making treatment ineffective. Thus, it is necessary to find novel targets and develop new antibacterial drugs with a novel mechanism of action. The process of bacterial cell division is a novel and attractive target for new antibacterial drug discovery. FtsZ, a homolog of eukaryotic tubulin, is the major protein of the bacterial cell division machinery and is considered an important antibacterial drug target. Zingiberaceae, the ginger family, consists of aromatic herbs with creeping rhizomes. Many of these plants have antimicrobial properties. This study aimed to determine the antibacterial activity of selected zingiberaceous plants by targeting the bacterial cell division protein FtsZ. Essential oils and methanol extracts of Amomum ghaticum, Alpinia galanga, Kaempferia galanga, K. rotunda, and Zingiber officinale were tested for antibacterial efficiency using the disc diffusion method against authentic bacterial strains obtained from MTCC (India). Essential oils isolated from A. galanga and Z. officinale were further assayed for FtsZ inhibition following a non-radioactive malachite green-phosphomolybdate assay using E. coli FtsZ protein obtained from Cytoskeleton Inc., USA. Z. officinale essential oil possesses FtsZ inhibitory properties. A molecular docking study was conducted with the known bioactive compounds of Z. officinale as ligands against an E. coli FtsZ protein homology model. Some of the major constituents of this plant, such as catechin, epicatechin, and gingerol, possess agreeable docking scores. The results of this study revealed that several chemical constituents of ginger plants can be utilised as a potential source of antibacterial activity, and this warrants further investigation through drug discovery studies.

Keywords: antibacterial, FtsZ, zingiberaceae, docking

Procedia PDF Downloads 449
2189 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as changes in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system: it controls PF by providing and maintaining the desired speed, entered through the HMI, at the centrifugal pump with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. The system's PID performance in pressure control was assessed during 10 EVHP experiments using porcine hearts.
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system demonstrated significantly faster speed than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and greater precision in reaching the desired RPM (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based system was thus 3 times faster, with 60% less error, than the EO for RPM-flow control; in pressure control mode, it demonstrated high precision and fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
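The closed-loop pressure mode described above can be sketched as a standard discrete PID update: each cycle, the controller compares the measured MAP against the target and adjusts the pump command. This is an illustrative reconstruction, not the study's actual PLC program; the class name, gains, and sampling interval are assumptions:

```python
# Hedged sketch of a discrete PID update, as used conceptually in the
# pressure control mode: the output would be applied as a pump-speed
# correction each sampling interval. Gains here are placeholders.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        """One control cycle: returns the correction for the pump command."""
        error = target - measured
        self.integral += error * self.dt           # accumulated error term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In use, `update(target_map, measured_map)` would be called at a fixed interval, with the returned correction added to the centrifugal pump's speed setting until the measured MAP settles at the target.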

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 147
2188 Psychological Factors Predicting Social Distance during the COVID-19 Pandemic: An Empirical Investigation

Authors: Calogero Lo Destro

Abstract:

Numerous nations around the world are facing exceptional challenges in employing measures to stop the spread of COVID-19. Following the recommendations of the World Health Organization, a series of preventive measures have been adopted. However, individuals must comply with these rules and recommendations in order to make the measures effective. While COVID-19 was at its peak, it seemed crucially important to analyze which psychosocial factors contribute to the acceptance of such preventive behavior, thus favoring the management of the worldwide COVID-19 health crisis. In particular, identifying factors that hinder or facilitate adherence to social distancing has been considered crucial for containing the spread of the virus. Since the virus was first detected in China, Asian people could be considered a relevant outgroup targeted for exclusion. We also hypothesized that social distance could be influenced by characteristics of the target, such as smiling or coughing. 260 participants took part in this research on a voluntary basis. They filled in a survey designed to explore a series of COVID-19 measures (such as exposure to the virus and fear of infection). We also assessed participants' state and trait anxiety. The dependent variable was social distance, based on a measure of seating distance designed ad hoc for the present work. Our hypothesis that participants would report greater distance in response to Asian people was not confirmed. On the other hand, significantly lower distance was reported in response to smiling compared to coughing targets. Adopting a regression analysis model, we found that participants' social distance, in response to both coughing and smiling targets, was predicted by fear of infection and by the perception that COVID-19 could become a pandemic. Social distance in response to the coughing target was also significantly and positively predicted by age and state anxiety.
In summary, the present work sought to identify a set of psychological variables which may be predictive of social distancing.

Keywords: COVID-19, social distancing, health, preventive behaviors, risk of infection

Procedia PDF Downloads 102
2187 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification

Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado

Abstract:

The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment; DNA biosensors, by contrast, have the advantage of being simple, fast and economical. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in the hybridization of the strands. DNA biosensors are an advantage not only in the clinical field but also in numerous research areas such as food analysis and environmental control. Zebra mussels (ZM), Dreissena polymorpha, are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected only when adults or macroscopic larvae are observed; at this stage, however, it is too late to avoid their harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released to the environment by this invasive species. The plasmonic DNA biosensor uses gold nanoparticles as transducer elements, owing to their great optical properties and high sensitivity. The detection strategy is based on the immobilization of a short DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked through the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) for detecting zebra mussel were designed using Geneious software in order to maximize specificity.
Moreover, to increase the optical response, enzymatic amplification of the DNA may be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak at around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded onto the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should fulfill inexpensiveness and portability requirements. Sensing devices based on specific DNA detection hold great potential and can be exploited for in-situ sensing applications.

Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles

Procedia PDF Downloads 215
2186 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is precisely delivered to the target in cancer patients, the linear accelerator is equipped with a pretreatment on-board imaging system through which the patient setup is verified before each daily treatment. New-generation radiotherapy using beam-intensity modulation, usually associated with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit is counterproductive if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was executed for five months in the categories of safety, geometric accuracy and image quality. A marker phantom and a blade calibration plate were used for the QA of geometric accuracy, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated with a GE LightSpeed CT simulator and an ADAC Pinnacle treatment planning system. Finally, image quality was analyzed via the OsiriX medical imaging system. For the geometric accuracy test, the average deviations of the OBI isocenter in each direction were less than 0.6 mm with uncertainties less than 0.2 mm, while all other items showed displacements of less than 1 mm. For radiographic image quality, the spatial resolution was 1.6 lp/cm with contrasts less than 2.2%.
The spatial resolution, low contrast, and HU homogeneity of CBCT were larger than 6 lp/cm, less than 1% and within 20 HU, respectively. All tests were within the criteria, except the HU value of Teflon measured in full-fan mode, which exceeded the suggested value; this could be due to its inherently high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is truly necessary to achieve the best treatment for each patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 272
2185 The Integration Process of Non-EU Citizens in Luxembourg: From an Empirical Approach Toward a Theoretical Model

Authors: Angela Odero, Chrysoula Karathanasi, Michèle Baumann

Abstract:

Integration of foreign communities has been a forefront issue in Luxembourg for some time now. The country’s continued progress depends largely on the successful integration of immigrants. The aim of our study was to analyze factors which intervene in the course of integration of Non-EU citizens through the discourse of Non-EU citizens residing in Luxembourg, who have signed the Welcome and Integration Contract (CAI). The two-year contract offers integration services to assist foreigners in getting settled in the country. Semi-structured focus group discussions with 50 volunteers were held in English, French, Spanish, Serbo-Croatian or Chinese. Participants were asked to talk about their integration experiences. Recorded then transcribed, the transcriptions were analyzed with the help of NVivo 10, a qualitative analysis software. A systematic and reiterative analysis of decomposing and reconstituting was realized through (1) the identification of predetermined categories (difficulties, challenges and integration needs) (2) initial coding – the grouping together of similar ideas (3) axial coding – the regrouping of items from the initial coding in new ways in order to create sub-categories and identify other core dimensions. Our results show that intervening factors include language acquisition, professional career and socio-cultural activities or events. Each of these factors constitutes different components whose weight shifts from person to person and from situation to situation. Connecting these three emergent factors are two elements essential to the success of the immigrant’s integration – the role of time and deliberate effort from the immigrants, the community, and the formal institutions charged with helping immigrants integrate. We propose a theoretical model where the factors described may be classified in terms of how they predispose, facilitate, and / or reinforce the process towards a successful integration. 
Measures currently in place propose one-size-fits-all programs, yet integrative measures that target the family unit, and measures customized to target groups based on their needs, would work best.

Keywords: integration, integration services, non-eu citizens, qualitative analysis, third country nationals

Procedia PDF Downloads 285
2184 The Survey of Sea Cucumber Fisheries in QESHM Island Coasts: Persian Gulf

Authors: Majid Afkhami, Maryam Ehsanpour, Rastin Afkhami

Abstract:

Sea cucumbers are aquatic animals with a wide variety of properties beneficial to human health. Increasing demand for beche-de-mer, along with steady price increases, has led to worldwide intensification of sea cucumber harvesting. Rearing sea cucumbers together with shrimp controls the environmental pollution that results from excess nutrient enrichment on the pond bottom. These animals are detritus feeders; by consuming organic material on the bottom surface, they not only keep the environment clean but also promote the fast growth of the shrimp and of themselves. Holothuria scabra is a main species for the production of beche-de-mer and is heavily exploited in tropical regions of the world. The body wall, which forms 56% of the whole body, is the part used in beche-de-mer production. Holothuria scabra (sandfish) is an aspidochirote holothurian widely distributed in coastal regions throughout the Indo-Pacific. H. scabra is often found on inner reef flats and near estuaries, half buried in silty sand during the day and emerging at night to feed. In this study, based on information from local fishermen on Qeshm Island, we provide data about fishing methods, processing, and distribution along the Qeshm Island coastline. A comparative study of fishing status with other parts of the world indicated that the status of sea cucumber stocks around Qeshm Island is suitable. To prevent overexploitation of the sandy sea cucumber, the capture prohibition should continue. In this study, seven exploited sites were recognized; the target size for fishermen was more than 20 cm, and the sandy cucumber was the target species around Qeshm Island. In this area, the fishing operation is carried out only by scuba diving and only by men, although in other countries women play an important role in sea cucumber fishing. 
Along the coast of Qeshm Island, it is found at Hmoon, Tolla, Kovei, Ramchah, Messen, and Hengam. The maximum recorded length and weight were 35 cm and 1080 g, respectively.

Keywords: sea cucumber, Holothuria scabra, fishing status, Qeshm Island

Procedia PDF Downloads 421
2183 Adaptor Protein APPL2 Could Be a Therapeutic Target for Improving Hippocampal Neurogenesis and Attenuating Depressant Behaviors and Olfactory Dysfunctions in Chronic Corticosterone-induced Depression

Authors: Jiangang Shen

Abstract:

Olfactory dysfunction is a common symptom accompanied by anxiety- and depressive-like behaviors in depressive patients. Chronic stress triggers hormone responses and inhibits the proliferation and differentiation of neural stem cells (NSCs) in the hippocampus and the subventricular zone (SVZ)-olfactory bulb (OB), contributing to depressive behaviors and olfactory dysfunction. However, the cellular signaling molecules that regulate chronic stress-mediated olfactory dysfunction are largely unclear. Adaptor proteins containing the pleckstrin homology domain, phosphotyrosine binding domain, and leucine zipper motif (APPLs) are multifunctional adaptor proteins. Herein, we tested the hypothesis that APPL2 could inhibit hippocampal neurogenesis by affecting glucocorticoid receptor (GR) signaling, subsequently contributing to depressive and anxiety behaviors as well as olfactory dysfunctions. The major findings are as follows: (1) APPL2 Tg mice had enhanced GR phosphorylation under basal conditions but showed no difference in plasma corticosterone (CORT) level or GR phosphorylation under stress stimulation. (2) APPL2 Tg mice had impaired hippocampal neurogenesis and displayed depressive and anxiety behaviors. (3) The GR antagonist RU486 reversed the impaired hippocampal neurogenesis in the APPL2 Tg mice. (4) APPL2 Tg mice displayed higher GR activity and less capacity for neurogenesis in the olfactory system, with lower olfactory sensitivity than WT mice. (5) APPL2 negatively regulates olfactory functions by switching the fate commitments of NSCs in adult olfactory bulbs via interaction with Notch1 signaling. Furthermore, baicalin, a natural medicinal compound, was found to be a promising agent targeting APPL2/GR signaling and promoting adult neurogenesis in APPL2 Tg mice and chronic corticosterone-induced depression mouse models. Behavioral tests revealed that baicalin had antidepressant and olfactory-improving effects. 
Taken together, APPL2 is a critical therapeutic target for antidepressant treatment.

Keywords: APPL2, hippocampal neurogenesis, depressive behaviors and olfactory dysfunction, stress

Procedia PDF Downloads 57
2182 Spectral Linewidth Measurement of Linear Frequency Modulated Continuous Wave Laser with Short Delay within the Coherence Length

Authors: Jongpil La, Jieun Choi

Abstract:

Optical frequency modulation technology for FMCW LiDAR based on an Optical Phase-Locked Loop (OPLL) configuration is addressed in this paper. The spectral linewidth measurement method for the linearly frequency-modulated laser is also described. A single-frequency laser with narrow spectral linewidth is generated using an external cavity diode laser, and the lasing frequency is adjusted by controlling the injection current of the laser. When the injection current of the laser is increased, the lasing frequency decreases because of a slight increase in the refractive index of the laser gain chip. The dynamic optical frequency change rate is measured using a Mach-Zehnder interferometer and compared with a proper reference signal. The phase difference between the reference signal and the signal measured with the Mach-Zehnder interferometer is obtained by mixing the two signals. The phase error is used to detect the frequency deviation from the target value, which is then fed back to the driving current of the laser to compensate for it. The frequency sweep error relative to the ideal linear frequency waveform degrades the spectral linewidth of the target spectrum and thus the maximum range performance of FMCW LiDAR. Therefore, the spectral linewidth measurement of a frequency-modulated laser is very important to evaluate the performance of the LiDAR system. However, it is impossible to apply the conventional self-homodyne or self-heterodyne method with a long delay line to evaluate the spectral linewidth of the frequency-modulated laser, because the beat frequency generated by the long delay line is too high to measure with a high-bandwidth frequency-modulated laser. In this article, the spectral linewidth of the frequency-modulated laser is measured using the newly proposed self-heterodyne method with a short delay line. The theoretical derivation for the proposed linewidth measurement method is provided in this article. 
The laser's spectral modulation bandwidth and linewidth are measured as 2.91 GHz and 287 kHz, respectively.

Keywords: FMCW, LiDAR, spectral linewidth, self-heterodyne

Procedia PDF Downloads 15
2181 An Interdisciplinary Approach to Investigating Style: A Case Study of a Chinese Translation of Gilbert’s (2006) Eat Pray Love

Authors: Elaine Y. L. Ng

Abstract:

Elizabeth Gilbert’s (2006) biography Eat, Pray, Love describes her travels to Italy, India, and Indonesia after a painful divorce. The author’s experiences with love, loss, and the search for happiness and meaning have resonated with a huge readership. As regards the translation of Gilbert’s (2006) Eat, Pray, Love into Chinese, it was first translated by the Taiwanese translator He Pei-Hua and published in Taiwan in 2007 by Make Boluo Wenhua Chubanshe under the fairly catchy title “Enjoy! Traveling Alone.” The same translation was translocated to China, republished in simplified Chinese characters by Shanxi Shifan Daxue Chubanshe in 2008, and retitled “To Be a Girl for the Whole Life.” Later on, the same translation in simplified Chinese characters was reprinted by Hunan Wenyi Chubanshe in 2013. This study employs Munday’s (2002) systemic model for descriptive translation studies to investigate the translation of Gilbert’s (2006) Eat, Pray, Love into Chinese by the Taiwanese translator Hu Pei-Hua. It takes an interdisciplinary approach, combining systemic functional linguistics and corpus stylistics with sociohistorical research within a descriptive framework to study the translator’s discursive presence in the text. The research consists of three phases. The first phase locates the target text within its socio-cultural context. The target-text context concerning the para-texts, readers’ responses, and the publishers’ orientation will be explored. The second phase compares the source text and the target text to categorize translation shifts, using the methodological tools of systemic functional linguistics and corpus stylistics. The investigation concerns the rendering of mental clauses and speech and thought presentation. The final phase is an explanation of the causes of translation shifts. 
The linguistic findings are related to the extra-textual information collected in an effort to ascertain the motivations behind the translator’s choices. Several possible factors may have contributed to shaping the textual features of the given translation within a specific socio-cultural context. The study finds that the translator generally reproduces the mental clauses and speech and thought presentation closely according to the original. Nevertheless, the language of the translation has been widely criticized as unidiomatic and stiff, losing the elegance of the original. In addition, the several Chinese translations of the given text produced by one Taiwanese and two Chinese publishers are basically the same. They are repackaged slightly differently, mainly with changes to the book cover and its captions for each version. By relating the textual findings to the extra-textual data of the study, it is argued that the popularity of the Chinese translation of Gilbert’s (2006) Eat, Pray, Love may not be attributable to the quality of the translation. Instead, it may have to do with the way the work is promoted strategically through the social media of the four e-bookstores promoting and selling the book online in China.

Keywords: chinese translation of eat pray love, corpus stylistics, motivations for translation shifts, systemic approach to translation studies

Procedia PDF Downloads 152
2180 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from, or to avoid paying, a telecommunication company. A South African telecommunication operator developed two internal fraud scorecards to mitigate future risks of application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are utilised in the vetting process to evaluate the fraud risk an applicant would present to the telecommunication operator. Telecommunication providers can utilise these scorecards to profile customers, as well as to isolate fraudulent and/or high-risk applicants. We provide the complete methodology utilised in the development of the scorecards. Furthermore, a Determination and Discrimination (DD) ratio is provided in the methodology to select the most influential variables from a group of related variables. Throughout the development of these scorecards, the following was revealed regarding fraudulent cases and fraudster behaviour within the telecommunications industry: Fraudsters typically target high-value handsets. Furthermore, debit order dates scheduled for the end of the month have the highest fraud probability. The fraudsters target specific stores. Applicants who acquire an expensive package and earn a medium income, as well as applicants who obtain an expensive package and earn a high income, have higher fraud percentages. If, one month prior to application, the status of an account is already in arrears (two months or more), the applicant has a high probability of fraud. The applicants with the highest average spend on calls have a higher probability of fraud. 
If the amount collected changes from month to month, the likelihood of fraud is higher. Lastly, young and middle-aged applicants are more likely than applicants of other ages to be targeted by fraudsters.
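The operator's scorecards and the DD ratio procedure are proprietary, so as a generic illustration only: an application-fraud scorecard of this kind is commonly a logistic regression over application attributes. The sketch below uses synthetic data and two hypothetical features (handset value and an end-of-month debit-order flag) chosen to echo the reported findings; it is not the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applications: high handset value and end-of-month debit dates
# are made to raise fraud risk, mirroring the findings described above.
n = 5000
handset_value = rng.uniform(0, 1, n)        # scaled handset price (hypothetical)
debit_day = rng.integers(1, 29, n)          # scheduled debit-order day
end_of_month = (debit_day >= 25).astype(float)
X = np.column_stack([handset_value, end_of_month])

# Labels drawn from a known logistic model so the fit can be sanity-checked.
logit = -3.0 + 2.5 * handset_value + 1.5 * end_of_month
y = rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]       # fraud probability per application
```

In a vetting process, `scores` would be banded into accept / review / decline decisions; a real scorecard would of course add the behavioural variables the paper identifies (arrears status, call spend, store, age).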

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 96
2179 Customer Segmentation Revisited: The Case of the E-Tailing Industry in Emerging Market

Authors: Sanjeev Prasher, T. Sai Vijay, Chandan Parsad, Abhishek Banerjee, Sahakari Nikhil Krishna, Subham Chatterjee

Abstract:

With the rapid rise of internet retailing, the industry is set for a major implosion. With little differentiation among competitors, companies find it difficult to segment and target the right shoppers. The objective of the study is to segment Indian online shoppers on the basis of two sets of factors – website characteristics and shopping values. Together, these cover the extrinsic and intrinsic factors that affect shoppers as they visit web retailers. Data were collected using a questionnaire from 319 Indian online shoppers, and factor analysis was used to confirm the factors influencing the shoppers in their selection of web portals. Thereafter, cluster analysis was applied, and different segments of shoppers were identified. The relationship between income groups and online shoppers’ segments was tracked using correspondence analysis. Significant findings from the study include that web entertainment and informativeness together contribute more than fifty percent of the total influence on web shoppers. Contrary to the general perception that shoppers seek utilitarian leverages, the present study highlights the preference for fun, excitement, and entertainment while browsing a website. Four segments, namely Information Seekers, Utility Seekers, Value Seekers and Core Shoppers, were identified and profiled. Value Seekers emerged as the most dominant segment, with two-fifths of the respondents endorsing hedonic as well as utilitarian shopping values. With overlap among the segments, utilitarian shopping value garnered prominence, with more than fifty-eight percent of the total respondents. Moreover, a strong relation has been established between income levels and the segments of Indian online shoppers. Web shoppers show different motives, from utility seekers to information seekers, core shoppers and finally value seekers, as income levels increase. Companies can strategically use this information for target marketing and align their web portals accordingly. 
This study can further be used to develop models revolving around satisfaction, trust and customer loyalty.

Keywords: online shopping, shopping values, effectiveness of information content, web informativeness, web entertainment, information seekers, utility seekers, value seekers, core shoppers

Procedia PDF Downloads 173
2178 New Insights into Pancreatic Cancer Biology

Authors: Nadia Akbarpour

Abstract:

Pancreatic ductal adenocarcinoma is an aggressive and devastating disease, characterized by invasiveness, rapid progression, and profound resistance to treatment. Advances in pathological classification and cancer genetics have improved our descriptive understanding of this disease; however, major aspects of pancreatic cancer biology remain poorly understood. A better understanding of pancreatic cancer biology should pave the way to more effective treatments. Over the last few years, there have been significant advances in the molecular and biological understanding of pancreatic cancer. These include understanding of the genomic complexity of the disease, the role of pancreatic cancer stem cells, the significance of the tumor microenvironment, and the unique metabolic adaptation of pancreatic cancer cells to acquire nutrients under a hypoxic environment. Efforts have been made towards the development of effective treatments, with limited success owing to the disease's complex biology. It is well established that pancreatic cancer stem cells (CSCs), although present in small numbers, contribute greatly to pancreatic cancer (PC) initiation, progression, and metastasis. Standard chemo- and radiotherapeutic options, while extending overall survival, carry side effects that are a significant concern. During the last decade, our understanding of the molecular and cellular pathways involved in PC, and of the role of CSCs in its progression, has increased enormously. Presently, the focus is on targeting CSCs. Natural products have gained much attention recently as they, in general, sensitize CSCs to chemotherapy and target the molecular signaling involved in various tumors, including PC. Several planned studies have shown promising results, suggesting that investigations in this direction have much to offer for the treatment of PC. 
Although preclinical investigations have revealed the importance of natural products in reducing pancreatic carcinoma, limited studies have been conducted to evaluate their role in the clinic. The present review provides new insight into recent advances in pancreatic cancer biology and treatment, and the current status of natural products in its prevention.

Keywords: pancreatic, genomic, organic, cancer

Procedia PDF Downloads 119
2177 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery

Authors: Maximilian Engelhardt, Stephan Seeck

Abstract:

The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to traffic and negative environmental impact. Furthermore, the service level of parcel logistics service providers lags far behind the expectations of consumers. These challenges can be addressed by radically reorganizing the urban last-mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement in consumer satisfaction with the overall delivery experience. However, it also leads to significantly increased costs per parcel. While there is a relevant share of online shoppers who are willing to pay for such a delivery service, no deeper insights about this target group are available in the literature. Being aware of the importance of knowing target groups for businesses, the aim of this paper is to elaborate the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). In order to answer those questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 within the five largest cities in Germany (n = 1,071). The data set contains socio-demographic, living-related and value-related variables, e.g. age, income, city, living situation and willingness to pay. In prior work by the authors, the data was analyzed applying descriptive and inferential statistical methods, which provided only limited insights regarding the above-mentioned research questions. Analyzing the data in an exploratory way using factor and cluster analysis promises deeper insights into relevant influencing factors and segments for user behavior of the mentioned parcel delivery concept. 
The analysis model is built and implemented with the help of the statistical software R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors that determine user behavior towards sustainable and service-oriented parcel deliveries (e.g. age, current service experience, willingness to pay) and give deeper insights into the characteristics of the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, and more environmentally friendly through cargo bikes. The results will give detailed insights regarding their target groups of parcel recipients. Further research can be conducted by exploring alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g. online shops that increase their service level or municipalities that reduce traffic on their streets.
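The two-step approach (exploratory factor analysis, then clustering on the factor scores) can be sketched in a few lines. The study itself uses R and its data are not public, so the sketch below is in Python with synthetic stand-in survey items; the dimensions and loadings are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-in for n = 1,071 respondents answering eight survey items
# driven by two latent attitudes (e.g. service orientation, price sensitivity).
n = 1071
latent = rng.normal(size=(n, 2))            # two hidden attitude dimensions
loadings = rng.normal(size=(2, 8))          # how attitudes show up in items
X = latent @ loadings + 0.5 * rng.normal(size=(n, 8))

# Step 1: exploratory factor analysis to recover the underlying factors.
fa = FactorAnalysis(n_components=2, random_state=0)
factor_scores = fa.fit_transform(X)

# Step 2: cluster respondents on their factor scores to derive segments.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = km.fit_predict(factor_scores)
```

Segment profiles would then be read off by cross-tabulating `segments` against the socio-demographic variables (age, income, city) and the stated willingness to pay.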

Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay

Procedia PDF Downloads 90
2176 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method

Authors: Lina Wu, Jia Liu, Ye Li

Abstract:

The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for a p-energy functional have been applied in the calculus of variations as computation techniques. Stokes’ theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and from above. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to lie in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in an analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. Thereby, the possibility of a non-constant p-energy minimizing map has been ruled out and the constant property for a p-energy minimizing map has been obtained. Our research establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space). The range of p is also bounded by the curvature values on the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our result on an ellipsoid is a generalization of mathematicians’ results on a sphere. Our result is also an extension of mathematicians’ Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
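For reference, the p-energy functional referred to above is standard; in the usual normalization (the authors' conventions may differ slightly), for a map u from a Euclidean domain into the target,

```latex
E_p(u) \;=\; \frac{1}{p}\int_{\mathbb{R}^m} |du|^{p}\,dx, \qquad p \ge 2,
```

and u is called p-stable when the second variation satisfies \( \frac{d^{2}}{dt^{2}} E_p(u_t)\big|_{t=0} \ge 0 \) for every compactly supported variation \( u_t \) with \( u_0 = u \) whose images remain in the target; the p-harmonic stability inequality analyzed above is derived from this condition.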

Keywords: Bochner formula, calculus of variations, Stokes' theorem, Cauchy-Schwarz inequality, first and second variation formulas, Liouville-type problem, p-harmonic map

Procedia PDF Downloads 249
2175 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses infrequently fit a certain parametric form. On the other hand, robust estimates for the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived based on constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. 
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients’ covariates and also the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs have been highlighted and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, thus making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.
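The general response-adaptive mechanism can be sketched with a toy simulation. This is a deliberate simplification, not the paper's method: binary responses stand in for the survival outcomes, and a generic square-root-type target (in the spirit of Rosenberger-style allocations) replaces the Cox-coefficient-based targets derived above. It shows only the core loop: re-estimate at each arrival, recompute the target, and toss a biased coin.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true success rates; arm A is the better treatment.
p_true = {"A": 0.7, "B": 0.5}
counts = {"A": 1, "B": 1}       # pseudo-counts avoid division by zero early on
successes = {"A": 1, "B": 1}

alloc = []
for _ in range(500):
    pa = successes["A"] / counts["A"]   # interim estimate for arm A
    pb = successes["B"] / counts["B"]   # interim estimate for arm B
    # Square-root-type target allocation to arm A, updated as data accrue.
    target = np.sqrt(pa) / (np.sqrt(pa) + np.sqrt(pb))
    # Biased coin: assign the incoming patient with the current target prob.
    arm = "A" if rng.uniform() < target else "B"
    counts[arm] += 1
    successes[arm] += rng.uniform() < p_true[arm]
    alloc.append(arm)

share_A = alloc.count("A") / len(alloc)
# With pa > pb the allocation drifts above one half toward the better arm.
```

A CARA design would additionally condition the target on each patient's covariate profile; here the target depends only on the pooled response history.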

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 145
2174 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English

Authors: Duong Thuy Nguyen, Giulia Bencini

Abstract:

The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension by native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch) and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers’ comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups demonstrated differences in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and the phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with English L1 speakers and L2 English-Spanish speakers. 
The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlights similarities and differences between L1 and L2 sentence processing and comprehension.

Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing

Procedia PDF Downloads 130
2173 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network

Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu

Abstract:

Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance can no longer satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavioral location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of a vehicle defaults to background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behaviors of vehicles in untrimmed videos. First, target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) networks and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. 
Finally, the model fuses the temporal and spatial information and obtains the location and category of each behavior through a softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages over the existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the Average Precision reaches 36.3% (versus 21.5% for the baselines). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors from long videos. Experiments show that the proposed algorithm is accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while maintaining accuracy.
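The proposal-extraction and evaluation steps described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code; the function names, threshold, and score values are invented:

```python
def extract_proposals(scores, m):
    """Extract contiguous frame ranges whose action score exceeds threshold m,
    mimicking the thresholding of the one-dimensional score waveform."""
    proposals, start = [], None
    for i, s in enumerate(scores):
        if s >= m and start is None:
            start = i
        elif s < m and start is not None:
            proposals.append((start, i))
            start = None
    if start is not None:
        proposals.append((start, len(scores)))
    return proposals


def tiou(seg_a, seg_b):
    """Time Intersection over Union of two (start, end) temporal segments,
    the matching criterion used for evaluation."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0
```

For example, `extract_proposals([0.1, 0.7, 0.8, 0.2, 0.9], 0.5)` yields `[(1, 3), (4, 5)]`; a proposal would then count as correct when its TIoU with a ground-truth segment is at least 0.5.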

Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning

Procedia PDF Downloads 100
2172 Online Consortium of Independent Colleges and Universities (OCICU): Using Cluster Analysis to Grasp Student and Institutional Value of Consolidated Online Offerings in Higher Education

Authors: Alex Rodriguez, Adam Guerrero

Abstract:

Purpose: This study is designed to examine the institutions that comprise the Online Consortium of Independent Colleges and Universities (OCICU) in order to better understand the types of higher education institutions that make up its membership. The literature on this topic extensively analyzes the current economic environment around higher education, which is largely considered negative for independent, tuition-driven institutions and is forcing colleges and universities to reexamine how the college-attending population defines value and how institutions can best utilize their existing resources (and those of other institutions) to meet that value expectation. The results of this analysis are intended to give OCICU the ability to better target its current customer base, based on its members' most notable differences, and to show other institutions how best to approach consolidation within higher education. Design/Methodology: This study utilized k-means cluster analysis to explore the possibility that different segments exist within the seventy-one colleges and universities that have comprised OCICU. It analyzed fifty different variables, selected on the basis of the previous literature, collected by the Integrated Postsecondary Education Data System (IPEDS), whose data are self-reported by individual institutions. Findings: OCICU member institutions are partitioned into two clusters, "access institutions" and "conventional institutions," based largely on the student profile they target. Value: The methodology of the study is relatively unique, as there are not many studies within the field of higher education marketing that have employed cluster analysis, and this type of analysis has never been conducted on OCICU members specifically, or on any consolidated higher education offering.
OCICU can use the findings of this study to gain a better grasp of the specific needs of the two market segments it currently serves and to develop measurable marketing programs, built around how those segments are defined, that communicate the value sought by current and potential OCICU members or similar institutions. Other consolidation efforts within higher education can also employ the same methodology to determine their own market segments.
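The clustering step can be illustrated in miniature. The study ran k-means over fifty IPEDS variables; the sketch below runs the same algorithm on toy two-dimensional data with deterministic initialization for reproducibility (pure Python, illustrative only; the data points are invented):

```python
import math


def kmeans(points, k, iters=20):
    """Lloyd's k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its cluster."""
    centroids = list(points[:k])  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters


# Two well-separated toy "institution profiles" partition into two clusters.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(data, 2)
```

On this toy data the algorithm converges to centroids near (0.33, 0.33) and (10.33, 10.33), one per group, which is the same partitioning logic that separated "access" from "conventional" institutions in the study.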

Keywords: consolidation, colleges, enrollment, higher education, marketing, strategy, universities

Procedia PDF Downloads 112
2171 Cross Site Scripting (XSS) Attack and Automatic Detection Technology Research

Authors: Tao Feng, Wei-Wei Zhang, Chang-Ming Ding

Abstract:

Cross-site scripting (XSS) is currently one of the most common web attack methods, and also one of the riskiest. With the popularity of JavaScript, the scope of cross-site scripting attacks has gradually expanded as well. However, web application developers tend to focus only on functional testing and lack awareness of XSS, which has left many online web projects with XSS vulnerabilities. In this paper, various techniques of XSS attack are analyzed, and a method to detect them automatically is proposed. When run as a plug-in, the results of the vulnerability detection are easy to check.
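One common automatic check for reflected XSS is to submit a uniquely tagged script payload and test whether the application echoes it back unescaped. The sketch below is a self-contained illustration with a stand-in page renderer; the probe string and function names are invented and do not represent the tool described in the paper:

```python
import html

PROBE = '<script>alert("xss-probe-7f3a")</script>'  # uniquely tagged payload


def render_page(user_input, escape=True):
    """Stand-in for a web application that echoes user input into its response."""
    body = html.escape(user_input) if escape else user_input
    return f"<html><body>You searched for: {body}</body></html>"


def looks_vulnerable(response_html):
    """The page is flagged when the probe script comes back unescaped."""
    return PROBE in response_html
```

A page that escapes its output returns `&lt;script&gt;...` and is not flagged, while a page that echoes raw input is, which is the core signal an automatic scanner looks for.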

Keywords: XSS, no target attack platform, automatic detection, XSS detection

Procedia PDF Downloads 381
2170 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases

Authors: Jeena Gupta

Abstract:

In today's world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to contain hidden pharmaceutical ingredients that impart anti-microbial, anti-oxidant, anti-inflammatory, and anti-carcinogenic qualities. Furthermore, aberrant epigenetic regulatory mechanisms such as DNA methylation, histone modifications, or altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effect by influencing these epigenetic mechanisms, which are thus the target of dietary phytochemicals. Bioactive components of plants have been in use for ages, but their potential to reverse epigenetic alterations and prevent disease is yet to be explored. Spices are rich repositories of bioactive constituents, which give them their unique aroma and taste. Some spices, like curcuma and garlic, have been well evaluated for their epigenetic regulatory potential, but for others it is largely unknown. We have evaluated the biological activity of phyto-active components of fennel, cardamom, and fenugreek by in silico molecular modeling as well as in vitro and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds in order to understand their biological behavior. The database searching was done using fenchone from fennel, sabinene from cardamom, and protodioscin from fenugreek as query molecules in different small-molecule databases. The results of the database searching showed that these compounds have potential binding to different targets found in the Protein Data Bank.
Further, in addition to their potential as epigenetic modifiers, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant, and protective effects against cytotoxicity of fenchone, sabinene, and protodioscin. To the best of our knowledge, such studies facilitate target fishing and help map out the drug design and discovery process for the identification of novel therapeutics.
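Ligand-based database searches of the kind described are typically ranked by fingerprint similarity between the query molecule and each database compound. The toy sketch below uses the Tanimoto coefficient on invented bit-set fingerprints; a real workflow would derive the fingerprints from molecular structures with a cheminformatics toolkit:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)


# Invented fingerprints: sets of "on" bit positions, not real descriptors.
query_fenchone = {1, 4, 9, 16, 25}
candidate_close = {1, 4, 9, 16, 30}  # shares 4 of 6 distinct bits
candidate_far = {2, 3, 5, 7}         # shares no bits with the query
```

Here `tanimoto(query_fenchone, candidate_close)` is 2/3 while the dissimilar candidate scores 0, so database hits would be ranked by this score before any docking against Protein Data Bank targets.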

Keywords: epigenetics, spices, phytochemicals, fenchone

Procedia PDF Downloads 130
2169 Statistical Characteristics of Code Formula for Design of Concrete Structures

Authors: Inyeol Paik, Ah-Ryang Kim

Abstract:

In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea Highway Bridge Design Code - the limit state design method (KHBDC), which is the current national bridge design code - and of the design code for concrete structures by the Korea Concrete Institute (KCI) are used in the analysis. The safety levels provided by the strength formulas of the design codes are defined based on probabilistic and statistical theory. KHBDC is a reliability-based design code; its load and resistance factors were calibrated to attain the target reliability index, and it is essential to define the statistical properties of the design formulas in this calibration process. In general, the statistical characteristics of a member strength stem from the following three factors. The first is the difference between the material strength of the actual construction and that used in the design calculation. The second is the difference between the actual dimensions of the constructed sections and those used in the design calculation. The third is the difference between the strength of the actual member and the formula simplified for the design calculation. In this paper, the statistical study focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented in different ways in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications. The test specimens were either reinforced with shear stirrups or not. For the applied database, the bias factor was about 1.12 and the coefficient of variation was about 0.18.
By applying the statistical properties of the design formula in the reliability analysis, it is shown that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. Also, the minimum resistance factors of the KHBDC, which is written in the material resistance factor format, and of the KCI, which is in the member resistance factor format, are obtained, and the results are presented. Further research is underway to calibrate the resistance factors of the high-strength and high-performance concrete design guide.
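The reported bias factor (about 1.12) and coefficient of variation (about 0.18) are, as described, statistics of the ratio of experimental member strength to the strength predicted by the design formula. A minimal sketch of that computation, with invented sample ratios:

```python
import math


def bias_and_cov(test_over_predicted):
    """Bias factor (mean) and coefficient of variation of the
    test-strength-to-predicted-strength ratios, using the sample
    standard deviation (n - 1 denominator)."""
    n = len(test_over_predicted)
    mean = sum(test_over_predicted) / n
    var = sum((r - mean) ** 2 for r in test_over_predicted) / (n - 1)
    return mean, math.sqrt(var) / mean


# Invented ratios of measured to code-predicted shear strength.
ratios = [1.0, 1.1, 1.2]
bias, cov = bias_and_cov(ratios)
```

On these toy ratios the bias factor is 1.1 and the coefficient of variation about 0.091; the study's database of shear tests yields the reported 1.12 and 0.18 by the same construction.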

Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property

Procedia PDF Downloads 295
2168 150 KVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter

Authors: Bartosz Kedra, Robert Malkowski

Abstract:

This paper provides a description and presentation of a laboratory test unit built around a 150 kVA power-frequency converter and the Simulink Real-Time platform. The criteria determining which load and generator types may be simulated using the discussed device are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface, the data maintenance and storage solution, and the Simulink Real-Time features used is presented. A list and description of all measurements are provided. The potential for laboratory setup modifications is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used. Therefore, the load model Functional Unit Controller is based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks that made possible the automatic generation of real-time applications and interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the performed laboratory tests are presented. Different load configurations are described, and experimental results are presented. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
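The under-frequency load shedding behavior simulated on the setup can be sketched as staged tripping: as the measured frequency falls through successive thresholds, additional fractions of load are shed. The thresholds and shed fractions below are illustrative and are not the settings used in the laboratory:

```python
# Illustrative UFLS stages: (frequency threshold in Hz, fraction of load shed).
UFLS_STAGES = [(49.0, 0.10), (48.7, 0.15), (48.4, 0.20)]


def shed_fraction(frequency_hz):
    """Total fraction of load shed at a given measured system frequency:
    every stage whose threshold has been crossed contributes its fraction."""
    return sum(frac for threshold, frac in UFLS_STAGES if frequency_hz <= threshold)
```

At a nominal 50 Hz nothing is shed; at 48.3 Hz all three stages have tripped and 45% of the load is disconnected. A real scheme would also apply per-stage time delays before tripping.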

Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer

Procedia PDF Downloads 298
2167 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy

Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane

Abstract:

Achieving precise radiotherapy through carbon therapy necessitates accurate monitoring of the radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitoring secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly of protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of the carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in the Si detectors, was developed to reconstruct the vertices. The analysis revealed a correlation between the reconstructed and actual positions.
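The back-projection step can be illustrated geometrically: treat the fragment trajectory as a straight line through two silicon-detector hits and take the vertex as the point of closest approach of that line to the beam axis, taken here as the z-axis. This is a minimal sketch under those assumptions, not the authors' algorithm:

```python
def reconstruct_vertex_z(hit_near, hit_far):
    """Back-project a straight fragment track, given two detector hits as
    (x, y, z) points, to its point of closest approach to the beam (z) axis,
    and return the z coordinate of the estimated vertex."""
    px, py, pz = hit_near
    dx = hit_far[0] - px
    dy = hit_far[1] - py
    dz = hit_far[2] - pz
    denom = dx * dx + dy * dy
    if denom == 0.0:  # track parallel to the beam axis: no transverse lever arm
        return pz
    t = -(px * dx + py * dy) / denom  # minimizes transverse distance to the axis
    return pz + t * dz
```

For instance, a proton emitted from a vertex at z = 5 along direction (1, 0, 1) leaves hits at (1, 0, 6) and (2, 0, 7), and back-projection recovers z = 5; with measured hits the reconstructed and true vertices agree only approximately, as in the reported analysis.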

Keywords: radiotherapy, carbon therapy, secondary proton dose monitoring, interaction vertex imaging

Procedia PDF Downloads 51