Search results for: distributed model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18009

8529 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of CKD. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD, and these models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well with all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predicted CKD as accurately as models using laboratory and metabolic indices data. Our machine learning models demonstrate the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
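The screen-then-train workflow described above (univariate screening of prognostic variables, then training a classifier on the selected features) can be illustrated with a minimal sketch. This is not the authors' Cox/Random Forest/XGBoost pipeline: it substitutes a simple correlation filter for the univariate Cox step and a plain logistic regression for the tree ensembles, on synthetic stand-ins for the non-laboratory features.

```python
import math, random

def univariate_screen(features, labels, threshold=0.15):
    """Keep feature columns whose absolute Pearson correlation with the
    label exceeds a threshold (a stand-in for univariate Cox screening)."""
    n = len(labels)
    ybar = sum(labels) / n
    keep = []
    for j in range(len(features[0])):
        col = [row[j] for row in features]
        xbar = sum(col) / n
        cov = sum((x - xbar) * (y - ybar) for x, y in zip(col, labels))
        sx = math.sqrt(sum((x - xbar) ** 2 for x in col))
        sy = math.sqrt(sum((y - ybar) ** 2 for y in labels))
        if sx > 0 and sy > 0 and abs(cov / (sx * sy)) > threshold:
            keep.append(j)
    return keep

def train_logistic(features, labels, cols, lr=0.1, epochs=500):
    """Plain online gradient-descent logistic regression on selected columns."""
    w = [0.0] * (len(cols) + 1)                    # last weight is the bias
    for _ in range(epochs):
        for row, y in zip(features, labels):
            x = [row[j] for j in cols] + [1.0]
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

# Synthetic non-laboratory data: [age/100, BMI/50, noise]; risk rises with both.
random.seed(0)
rows, ys = [], []
for _ in range(200):
    age, bmi, noise = random.random(), random.random(), random.random()
    rows.append([age, bmi, noise])
    ys.append(1 if age + bmi > 1.0 else 0)

cols = univariate_screen(rows, ys)      # informative columns survive screening
w = train_logistic(rows, ys, cols)

# Training accuracy of the screened-and-trained model:
correct = 0
for row, y in zip(rows, ys):
    x = [row[j] for j in cols] + [1.0]
    z = sum(wi * xi for wi, xi in zip(w, x))
    correct += (1 if z >= 0 else 0) == y
acc = correct / len(ys)
```

The screening step drops uninformative columns before training, which is the same idea the abstract applies when it groups variables by diagnostic category and trains on subgroups.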

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 89
8528 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Recently developed scales have addressed teachers’ resilience specifically. Although they have benefited the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed to design a more comprehensive scale for measuring teachers' resilience which encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter model (2PL) was applied to check the items within each factor. It revealed that 9 items did not fit their corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). Results show that it presents satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form.
In conclusion, our results showed that the TPFRS is a new multi-dimensional instrument that is valid for assessing teachers' protective factors of resilience and that it can be safely used in future research and interventions in the teaching profession.
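The per-factor α values reported above (e.g., α=.88 for values and beliefs) are internal-consistency coefficients of the Cronbach's alpha kind. As an illustrative sketch (the item data below are invented, not the study's), the coefficient can be computed from item-score columns as:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of item-score columns,
    where item_scores[i][p] is the score of person p on item i:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))."""
    k = len(item_scores)
    item_var_sum = sum(pvariance(item) for item in item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-person total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Perfectly consistent items (every item ranks persons identically) give alpha = 1.0:
items = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
print(cronbach_alpha(items))
```

In practice, alpha is computed per subscale, exactly as the abstract reports one α per protective factor.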

Keywords: resilience, protective factors, teachers, item response theory

Procedia PDF Downloads 74
8527 Generating Ideas to Improve Road Intersections Using Design with Intent Approach

Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton

Abstract:

Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to applying the usual techniques for making roads safer. A socio-technical approach, Design with Intent (DWI), has recently been introduced for improving road intersections. The DWI approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people’s understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people’s behavior across products, services, and environments, both digital and physical. Through this approach, it can be seen how designing with people for behavior change can be applied to social and environmental problems, as well as commercially. It comprises a total of 101 cards across eight different lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each with its own distinct way of eliciting ideas from participants. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from varying fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, took part in a day-long workshop. At the workshop, the participants were given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards from the various lenses were distributed among the 10 participants, who were given an hour and a half for brainstorming and generating ideas to improve the safety of the selected intersection.
After the brainstorming session, the participants held roundtable discussions of the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The accepted ideas were then synthesized and agglomerated into an improvement scheme for the intersection selected in our study. The most significant improvement ideas from the DWI approach were: color coding of traffic lanes for separate vehicle types, channelizing the existing bare intersection, providing advance warning signs, cautionary signs, and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips on the approaches to the intersection. The motive of this approach is to draw new ideas from the road users themselves rather than depending on traditional schemes alone, in order to increase the efficiency and safety of roads and to ensure road users' compliance, since these features are generated from the minds of the users.

Keywords: design with intent, road safety, human experience, behavior

Procedia PDF Downloads 126
8526 Simulation of Channel Models for Device-to-Device Application of 5G Urban Microcell Scenario

Authors: H. Zormati, J. Chebil, J. Bel Hadj Tahar

Abstract:

Next-generation wireless transmission technology (5G) is expected to support the development of channel models for higher frequency bands, so characterization of high-frequency bands is the most important issue in radio propagation research for 5G. Multiple urban microcellular measurements have been carried out at 60 GHz. In this paper, the collected data are uniformly analyzed with a focus on the path loss (PL); the objective is to compare the simulation results of several studied channel models in order to test the performance of each one.
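As an illustrative sketch (not one of the specific models compared in the paper), a close-in (CI) free-space-reference path-loss model of the kind commonly fitted to 60 GHz microcell measurements can be evaluated as follows; the path-loss exponent n = 2.1 in the example is an assumed value, not a measured one.

```python
import math

def ci_path_loss_db(d_m, f_hz, n, c=3e8):
    """Close-in (CI) reference-distance path-loss model:
    PL(d) = FSPL(f, 1 m) + 10 * n * log10(d / 1 m),
    where n is the path-loss exponent fitted from measurements."""
    fspl_1m = 20 * math.log10(4 * math.pi * 1.0 * f_hz / c)  # free-space loss at 1 m
    return fspl_1m + 10 * n * math.log10(d_m)

# Example: 60 GHz urban microcell with an assumed exponent n = 2.1.
# The free-space anchor at 1 m and 60 GHz is about 68 dB.
pl_100m = ci_path_loss_db(100.0, 60e9, 2.1)
```

Comparing measured PL against such parametric models at each distance is exactly the kind of model-performance test the abstract describes.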

Keywords: 5G, channel model, 60 GHz channel, millimeter-wave, urban microcell

Procedia PDF Downloads 296
8525 Smartphone-Based Human Activity Recognition by Machine Learning Methods

Authors: Yanting Cao, Kazumitsu Nawata

Abstract:

As smartphones are upgraded, their software and hardware are getting smarter, so smartphone-based human activity recognition can become more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector with time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature-selection and parameter-adjustment steps, a well-performing SVM classifier was trained.
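The training step can be sketched in miniature. This is not the authors' 561-feature pipeline: it trains a linear SVM by Pegasos-style subgradient descent on a tiny two-dimensional toy set standing in for two activity classes, purely to illustrate what fitting an SVM classifier involves.

```python
def train_linear_svm(xs, ys, lam=0.01, epochs=200):
    """Pegasos-style subgradient descent for a linear SVM.
    xs: list of feature vectors; ys: labels in {-1, +1}.
    A bias term is handled by appending a constant 1 feature."""
    w = [0.0] * (len(xs[0]) + 1)
    t = 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            xb = list(x) + [1.0]
            margin = y * sum(wi * xi for wi, xi in zip(w, xb))
            decay = 1.0 - eta * lam
            if margin < 1.0:                      # hinge-loss subgradient step
                w = [decay * wi + eta * y * xi for wi, xi in zip(w, xb)]
            else:                                 # only the regularizer acts
                w = [decay * wi for wi in w]
    return w

def predict(w, x):
    xb = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xb)) >= 0 else -1

# Toy stand-in for two activity classes (e.g. walking vs. lying):
xs = [(2, 2), (3, 1), (2, 3), (3, 3), (-2, -2), (-3, -1), (-2, -3), (-1, -2)]
ys = [1, 1, 1, 1, -1, -1, -1, -1]
w = train_linear_svm(xs, ys)
```

On the real dataset, the same idea is applied after feature selection has pruned the 561-dimensional vectors, and kernel SVMs or library implementations would normally be used instead of this hand-rolled sketch.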

Keywords: smart sensors, human activity recognition, artificial intelligence, SVM

Procedia PDF Downloads 132
8524 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers

Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec

Abstract:

Nonlinear phenomena in the near- and mid-infrared regions are attracting scientific attention, mainly due to the possibilities of supercontinuum generation and its subsequent utilization in ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be readily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser designed for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. The first part of the paper discusses the thulium laser, which is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts for pulses shorter than 200 ps at full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on the seed-laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage, a second gain stage is evaluated, incorporating a shorter thulium fiber with an increased rare-earth dopant ratio. Lastly, a power-booster stage is analyzed, in which peak powers of kilowatts should be achieved. Examples of the analytical study are further validated by the experimental campaign, and the simulation model is corrected based on real component parameters: real insertion losses, cross-talk, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to e.g. 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser.
The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; furthermore, the input pulse shape and peak power must be known, which is assured by the experimental measurement of the studied thulium pulsed laser. The supercontinuum simulation model is related to the designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and, mainly, non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured, or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a thulium pulsed laser system working with specialty optical fibers.

Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser

Procedia PDF Downloads 302
8523 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

Deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of points in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection components provides improved results but is labor-intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a given set of problems that obey certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of static-mode Global Positioning System (GPS) and precise leveling observations was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. By least squares, using a MATLAB program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed as 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model is not tenable owing to the large observational error it introduces into high-quality work. It is thus important to include the determined deflection-of-the-vertical components for Owerri West Local Government Area in Imo State, Nigeria.
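The least squares estimation of the two components can be sketched with a simplified observation model. This is an assumption, not necessarily the authors' exact functional model: the standard astro-geodetic levelling relation ΔN ≈ -s (ξ cos α + η sin α), linking the geoid-height difference ΔN along a baseline of length s and azimuth α to the deflection components ξ (north-south) and η (east-west), both in radians, solved via the 2×2 normal equations.

```python
import math

def solve_deflection(baselines, dn_obs):
    """Least squares estimate of (xi, eta) from geoid-height differences
    dN = -s * (xi*cos(a) + eta*sin(a))  (astro-geodetic levelling relation).
    baselines: list of (length s, azimuth a) pairs; dn_obs: observed dN."""
    # Design-matrix rows are [-s*cos(a), -s*sin(a)]; accumulate A^T A and A^T b.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (s, az), dn in zip(baselines, dn_obs):
        r1, r2 = -s * math.cos(az), -s * math.sin(az)
        a11 += r1 * r1; a12 += r1 * r2; a22 += r2 * r2
        b1 += r1 * dn;  b2 += r2 * dn
    det = a11 * a22 - a12 * a12          # 2x2 normal-equation determinant
    xi = (a22 * b1 - a12 * b2) / det
    eta = (a11 * b2 - a12 * b1) / det
    return xi, eta

# Synthetic check: recover assumed components from noise-free observations.
arcsec = math.pi / (180 * 3600)
xi_true, eta_true = -0.0286 * arcsec, -0.0001 * arcsec  # values from the abstract
bl = [(1000.0, math.radians(a)) for a in (0, 45, 90, 135, 210)]
dn = [-s * (xi_true * math.cos(az) + eta_true * math.sin(az)) for s, az in bl]
xi, eta = solve_deflection(bl, dn)
```

With redundant baselines (more than two), the same normal equations also yield the residuals from which standard errors like those quoted above are derived.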

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 190
8522 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, and environmental monitoring. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and transmits the biological response as an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber is presented. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solvent was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The morphological micro-structures of GO and rGO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber; the purpose was to control the vacuum conditions, such as air pressure and temperature, during the fabrication process. 
The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability, achieved at low production cost.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 102
8521 The MoEDAL-MAPP* Experiment - Expanding the Discovery Horizon of the Large Hadron Collider

Authors: James Pinfold

Abstract:

The MoEDAL (Monopole and Exotics Detector at the LHC) experiment, deployed at IP8 on the Large Hadron Collider ring, was the first dedicated search experiment to take data at the Large Hadron Collider (LHC), in 2010. It was designed to search for Highly Ionizing Particle (HIP) avatars of new physics such as magnetic monopoles, dyons, Q-balls, multiply charged particles, massive slowly moving charged particles, and long-lived massive charged SUSY particles. We shall report on our search at the LHC's Run 2 for magnetic monopoles and dyons produced in p-p collisions and photon-fusion. In more detail, we will report our most recent result in this arena: the search for magnetic monopoles produced via the Schwinger mechanism in Pb-Pb collisions. The MoEDAL detector is being reinstalled for the LHC's Run 3 to continue the search for electrically and magnetically charged HIPs with enhanced instantaneous luminosity, improved detector efficiency, and a factor of ten lower thresholds for HIPs. As part of this effort, we will search for massive long-lived, singly and multiply charged particles from various scenarios for which MoEDAL has competitive sensitivity. An upgrade to MoEDAL, the MoEDAL Apparatus for Penetrating Particles (MAPP), is now the LHC's newest detector. The MAPP detector, positioned in UA83, expands the physics reach of MoEDAL to include sensitivity to feebly charged particles with charge, or effective charge, as low as 10⁻³ e (where e is the electron charge). Also, in conjunction with MoEDAL's trapping detector, the MAPP detector gives us a unique sensitivity to extremely long-lived charged particles, and MAPP also has some sensitivity to long-lived neutral particles. The addition of an outrigger detector for MAPP-1, to increase its acceptance for more massive milli-charged particles, is currently at the Technical Proposal stage.
Additionally, we will briefly report on the plans for the MAPP-2 upgrade to the MoEDAL-MAPP experiment for the High Luminosity LHC (HL-LHC). This experiment phase is designed to maximize MoEDAL-MAPP’s sensitivity to very long-lived neutral messengers of physics beyond the Standard Model. We envisage this detector being deployed in the UGC1 gallery near IP8.

Keywords: LHC, beyond the standard model, dedicated search experiment, highly ionizing particles, long-lived particles, milli-charged particles

Procedia PDF Downloads 55
8520 Numerical Study on the Static Characteristics of Novel Aerostatic Thrust Bearings Possessing Elastomer Capillary Restrictor and Bearing Surface

Authors: S. W. Lo, S.-H. Lu, Y. H. Guo, L. C. Hsu

Abstract:

In this paper, a novel design of aerostatic thrust bearing is proposed and analyzed numerically. The capillary restrictor and bearing disk are made of elastomers such as silicone and PU. The viscoelasticity of the elastomer helps the capillary expand for greater air flux and, at the same time, allows a conicity of the bearing surface to form when the air pressure is raised. The bearing therefore has a better capability of passive compensation. In the present example, compared with the typical model, the new designs nearly double the load capacity and offer four times the static stiffness.

Keywords: aerostatic, bearing, elastomer, static stiffness

Procedia PDF Downloads 361
8519 Applying a SWOT Analysis to Inform the Educational Provision of Learners with Autism Spectrum Disorders

Authors: Claire Sciberras

Abstract:

Introduction: Autism Spectrum Disorder (ASD) has become recognized as the most common childhood neurological condition; indeed, numerous studies demonstrate an increase in the prevalence rate of children diagnosed with ASD. Concurrent with these findings, the European Agency for Special Needs and Inclusive Education reported a similarly escalating tendency in prevalence in Malta. Such an increase within the educational context in Malta has led the European Agency to call for increased support within the country's educational settings. However, although research has addressed the positive impact of mainstream education on learners with ASD, empirical studies of the internal and external strengths and weaknesses present within the support provided in mainstream settings in Malta are distinctly limited. In light of this, Malta would benefit from research which focuses on analysing the strengths, weaknesses, opportunities, and threats (SWOTs) present within the support provision for learners with ASD in mainstream primary schools. Such a SWOT analysis is crucial, as a lack of appropriate opportunities might jeopardize the educational and social experiences of persons with ASD throughout their schooling. Methodology: A mixed methodological approach is well suited to examining the provision of support for learners with ASD, as the combination of qualitative and quantitative approaches allows researchers to collect a comprehensive range of data and validate their results. Hence, it is intended that questionnaires will be distributed to all the stakeholders involved, so as to acquire a broader perspective from the wider group who provide support to students with ASD across schools in Malta. Moreover, a qualitative approach in the form of interviews with a sample group will be implemented.
Such an approach would potentially allow the researcher to gather an in-depth perspective on the nature of the services currently provided to learners with ASD. The intentions of the study: Through the analysis of the data collected on the SWOTs within the provision of support for learners with ASD, it is intended that: i) a description of the educational provision for learners with ASD within mainstream primary schools in Malta, in light of the experiences and perceptions of the stakeholders involved, will be acquired; ii) an analysis of the SWOTs which exist within the services for learners with ASD in primary state schools in Malta will be carried out; and iii) based on the SWOT analysis, recommendations that can lead to improvements in practice in the field of ASD in Malta and beyond will be provided. Conclusion: Due to the heterogeneity of individuals with ASD, which spans several deficits in the social communication and interaction domain as well as areas linked to restricted, repetitive behavioural patterns, educational settings need to adapt their standards to the needs of their students. The standards established by schools in prior phases do not remain applicable forever; they need to be reviewed periodically in accordance with the diversities and necessities of their learners.

Keywords: autism spectrum disorders, mainstream educational settings, provision of support, SWOT analysis

Procedia PDF Downloads 168
8518 The Meaning of Happiness and Unhappiness among Female Teenagers in Urban Finland: A Social Representations Approach

Authors: Jennifer De Paola

Abstract:

Objectives: The literature is saturated with figures and hard data on happiness and its rates, causes, and effects at a large scale, whereas very little is known about the way specific groups of people within societies understand and talk about happiness in their everyday life. The present study contributes to filling this gap in happiness research by analyzing social representations of happiness among young women through the theoretical frame provided by Moscovici’s Social Representation Theory. Methods: Participants were female students (N = 351; 16-18 years old) from Finnish-, Swedish-, and English-speaking high schools in the Helsinki region, Finland. The main source of data collection was word associations, using the stimulus word ‘happiness’ and, as a second stimulus, the term that in the participants’ opinion represents the opposite of happiness. The allowed number of associations was five per stimulus word (10 associations per participant). In total, the 351 participants produced 6973 associations with the two stimulus words: 3500 (50.19%) with ‘happiness’ and 3473 (49.81%) with ‘opposite of happiness’. The associations produced were analyzed qualitatively to identify associations with similar meaning, which were then coded by combining similar associations into larger categories. Results: In total, 33 categories were identified for each of the stimulus words ‘happiness’ and ‘opposite of happiness’. In general terms, the 33 categories identified for ‘happiness’ included associations regarding relationships with key people considered important, such as ‘family’; abstract concepts such as a meaningful life, success, and moral values; as well as more mundane and hedonic elements like food, pleasure, and fun.
Similarly, the 33 categories that emerged for ‘opposite of happiness’ included relationship problems and arguments; negative feelings such as sadness, depression, and stress; as well as more concrete issues such as financial problems. Participants were also asked to rate their own level of happiness on a scale from 1 to 10. The mean self-rated level of happiness was 7.93 (range 1 to 10; SD = 1.50). Participants’ responses were further divided into three groups according to the self-rated level of happiness: group 1 (levels 10-9), group 2 (levels 8-6), and group 3 (level 5 and lower), in order to investigate how the categories mentioned above were distributed among the different groups. Preliminary results show that the category ‘family’ is associated with a higher level of happiness, whereas its presence gradually decreases among participants with a lower level of happiness. Moreover, the category ‘depression’ seems to be mainly present among participants in group 3, whereas the category ‘sadness’ is mainly present among participants with a higher level of happiness. Conclusion: This study indicates the prevalent ways of thinking about happiness and its opposite among young female students, suggesting that representations varied to some extent depending on the happiness level of the participants. The study contributes new knowledge in that it considers happiness as a holistic state, thus going beyond the literature that has so far too often viewed happiness as a mere unidimensional spectrum.

Keywords: female, happiness, social representations, unhappiness

Procedia PDF Downloads 208
8517 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT), the application of artificial intelligence (AI) and deep neural networks to language processing, has recently achieved considerable development in translation quality, outperforming previous traditional translation systems in many language pairs. Despite this development, there remain serious challenges facing NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case for well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What bad weather! It rains cats and dogs.” into the target language Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score.
BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
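The BLEU metric used for evaluation above combines clipped n-gram precisions with a brevity penalty. A minimal single-reference sketch (without the smoothing that production toolkits apply to short sentences) looks like this:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. candidate/reference are token
    lists. Returns 0.0 if any n-gram precision is zero (no smoothing)."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())  # clipped matches
        total = max(sum(cand.values()), 1)
        if clipped == 0:
            return 0.0
        log_prec_sum += math.log(clipped / total)
    # Brevity penalty discourages candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(log_prec_sum / max_n)

reference = "il pleut des cordes".split()
score = bleu("il pleut des cordes".split(), reference)  # identical candidate
```

For idiom translation this metric rewards matching the reference idiom's exact wording, which is why human evaluation is added as a reliability check.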

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 128
8516 Simulation of Reflectometry in Alborz Tokamak

Authors: S. Kohestani, R. Amrollahi, P. Daryabor

Abstract:

Microwave diagnostics such as reflectometry are receiving growing attention in magnetic confinement fusion research. In order to obtain a better understanding of plasma confinement physics, more detailed measurements of the density profile and its fluctuations might be required. A 2D full-wave simulation of ordinary-mode propagation has been written in an effort to model effects seen in reflectometry experiments. The code uses the finite-difference time-domain method with a perfectly matched layer absorbing boundary to solve Maxwell’s equations. The code has been used to simulate the reflectometer measurement in the Alborz Tokamak.
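The finite-difference time-domain scheme at the heart of such a code can be illustrated in one dimension. This is not the authors' 2D O-mode code: it is a minimal 1D vacuum Yee-grid sketch in normalized units, with a first-order Mur absorbing boundary (exact at Courant number 1) standing in for the perfectly matched layer, showing the leapfrog E/H update that a full-wave solver iterates.

```python
import math

# Minimal 1-D FDTD (Yee scheme) in vacuum, normalized units.
# SC = c*dt/dx is the Courant number; SC = 1 makes the 1-D update exact
# and the first-order Mur boundary perfectly absorbing.
SIZE, STEPS, SC = 400, 150, 1.0
ez = [0.0] * SIZE
hy = [0.0] * (SIZE - 1)

for t in range(STEPS):
    for m in range(SIZE - 1):                      # update magnetic field
        hy[m] += SC * (ez[m + 1] - ez[m])
    ez_left, ez_right = ez[1], ez[-2]              # save for Mur ABC
    for m in range(1, SIZE - 1):                   # update electric field
        ez[m] += SC * (hy[m] - hy[m - 1])
    ez[20] += math.exp(-((t - 30.0) / 10.0) ** 2)  # additive Gaussian source
    ez[0], ez[-1] = ez_left, ez_right              # absorbing boundaries

# The right-going pulse, launched near node 20 around step 30, ends up
# roughly 120 cells to the right at one cell per step.
peak = max(range(SIZE), key=lambda m: abs(ez[m]))
```

A reflectometry simulation extends this to 2D, adds the plasma response (a density-dependent permittivity that reflects the wave at the cutoff layer), and replaces the simple boundary with a PML.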

Keywords: reflectometry, simulation, ordinary mode, tokamak

Procedia PDF Downloads 408
8515 Microbial Effects of Iron Elution from Hematite into Seawater Mediated via Dissolved Organic Matter

Authors: Apichaya Aneksampant, Xuefei Tu, Masami Fukushima, Mitsuo Yamamoto

Abstract:

A technique for restoring seaweed beds has been developed in which barren coastal areas are fertilized to supply dissolved iron. The fertilizer is composed of iron oxides as a source of iron and compost as a source of humic substances (HS), which can chelate iron and stabilize the dissolved species under oxic seawater conditions. However, the mechanisms by which iron elutes from iron oxide surfaces have not been sufficiently elucidated. In particular, the roles of microbial activities in the elution of iron from the fertilizer are not well understood. In the present study, a fertilizer (iron oxide/compost = 1/1, v/v) was incubated in a water tank at the Mashike coast, Hokkaido, Japan. Microorganisms in the 6-month fertilizer were isolated and identified as Exiguobacterium oxidotolerans sp. (T-2-2). The identified bacteria were inoculated into a Postgate B medium, prepared in artificial seawater, for an iron elution test. Hematite was used as a model iron oxide and anthraquinone-2,7-disulfonate (AQDS) as a model HS. The elution test was performed in the presence and absence of bacterial inoculation. ICP-AES was used to analyze total iron, and a colorimetric technique using ferrozine was employed for the determination of ferrous ion. During the incubation period, samples containing hematite and T-2-2, both with and without AQDS, continuously showed iron elution, reached the highest concentration after 9 days of incubation, and then decreased slightly to stabilize within 20 days. In the samples without T-2-2, only trace amounts of iron were observed, suggesting that iron elution to seawater can be attributed to bacterial activities. The levels of total organic carbon (TOC) in the culture solution with hematite decreased, which may be due to the adsorption of the organic compound AQDS onto hematite surfaces. The decrease in the UV-vis absorption of AQDS in the culture solution also supports the TOC results, indicating that AQDS was adsorbed onto hematite surfaces. AQDS can enhance iron elution, while the adsorption of organic matter suppresses iron elution from hematite.

Keywords: anthraquinone-2,7-disulfonate, barren ground, E. oxidotolerans sp., hematite, humic substances, iron elution

Procedia PDF Downloads 365
8514 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational fluid dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been characterized in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM volume-of-fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in shear-thinning fluids, which better reflect the apparent rheology of foam. These simulations shed new light on the cone's behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of cones as foam breakers. This study thus contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
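The shear-thinning constitutive law used for the foam can be sketched as an apparent-viscosity function. The parameter values below are illustrative assumptions, not the fitted rheology from the study, and the viscosity cap is a common numerical regularization of the yield-stress singularity.

```python
import numpy as np

def herschel_bulkley_viscosity(gamma_dot, tau0, k, n, mu_max=1e3):
    """Apparent viscosity mu(g) = tau0/g + k*g**(n-1) of a
    Herschel-Bulkley fluid, capped at mu_max to regularize the
    singularity as the shear rate g goes to zero."""
    g = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)
    mu = tau0 / g + k * g ** (n - 1)
    return np.minimum(mu, mu_max)

# illustrative (not measured) parameters for a foam-like fluid:
# yield stress tau0 [Pa], consistency k [Pa.s^n], index n < 1
shear_rates = np.logspace(-2, 3, 6)   # 1/s
mu = herschel_bulkley_viscosity(shear_rates, tau0=30.0, k=10.0, n=0.4)
print(mu)  # viscosity falls with shear rate: shear-thinning behavior
```

With n < 1 and a nonzero yield stress, the apparent viscosity decreases monotonically with shear rate, which is the behavior the VOF simulation assigns to the foam phase.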

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 192
8513 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung towards the fungus Aspergillus fumigatus may range from invasive aspergillosis in the immunocompromised, fungal ball or infection within a cavity in the lung in those with structural lung lesions, or allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. There are consensus guidelines that have established criteria for diagnosis of ABPA, but uncertainty remains on the serologic cut-off values that would increase the diagnostic specificity of ABPA. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia ( > 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE and specific IgG levels against Aspergillus fumigatus were measured using ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L and positive specific IgE Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG Aspergillus fumigatus and consistent radiographic opacities) was used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups - definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign the patients into any of the groups. Of 80 patients (males 48, females 32; mean age 53.9 years ± SD 15.8) selected for the analysis, there were 30 patients who had positive specific IgE against Aspergillus fumigatus (37.5%). 13 patients fulfilled the Modified ISHAM working group 2013 criteria of ABPA (‘definite’), while 15 patients were ‘possible’ ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. 
Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L respectively (non-significant), while median specific IgE against Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible, and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001), confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma are sensitized to Aspergillus fumigatus in this part of India. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would provide more information on whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and differ in this respect from the Gm3-negative group. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.
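The three-group comparison reported above can be sketched with a hand-rolled one-way ANOVA. The data below are simulated: the group means and sizes mirror the abstract, but the spreads of the possible and no-ABPA groups are assumptions for illustration.

```python
import numpy as np

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Synthetic Gm3 (anti-A. fumigatus IgG, mgA/L) samples using the group
# means and sizes reported in the abstract; spreads are illustrative.
rng = np.random.default_rng(1)
definite = rng.normal(125.2, 54.8, 13)
possible = rng.normal(18.6, 10.0, 15)
no_abpa = rng.normal(30.1, 15.0, 52)
F, df_b, df_w = one_way_anova([definite, possible, no_abpa])
print(f"F({df_b},{df_w}) = {F:.1f}")  # large F: group means differ
```

A large F relative to the F(2, 77) distribution corresponds to the p < 0.001 result; a post-hoc test such as Fisher's LSD then locates the difference in the definite group.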

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 191
8512 Genetic Diversity of Termite (Isoptera) Fauna of Western Ghats of India

Authors: A. S. Vidyashree, C. M. Kalleshwaraswamy, R. Asokan, H. M. Mahadevaswamy

Abstract:

Termites are vital ecological players in tropical ecosystems, having been designated 'ecosystem engineers' due to their significant role in providing soil ecosystem services. Despite their importance, our understanding of many of their basic biological processes is extremely limited. Developing a better understanding of termite biology depends closely on consistent species identification. At present, the identification of termites relies on the soldier caste, but for many species the soldier caste has not been described, which creates confusion in identification. The use of molecular markers may be helpful in estimating phylogenetic relatedness between termite species and genetic differentiation among local populations within each species. To understand this, termite samples were collected from various places in the Western Ghats covering four states, namely Karnataka, Kerala, Tamil Nadu, and Maharashtra, during 2013-15. Termite samples were identified based on their morphological characteristics, molecular characteristics, or both. The survey of the termite fauna in Karnataka, Kerala, Maharashtra, and Tamil Nadu indicated the presence of 16 species belonging to four subfamilies under two families, viz., Rhinotermitidae and Termitidae. Termitidae was the dominant family, represented by four subfamilies, viz., Macrotermitinae, Amitermitinae, Nasutitermitinae, and Termitinae. Amitermitinae had three species, namely Microcerotermes fletcheri, M. pakistanicus, and Speculitermes sinhalensis. Macrotermitinae had the highest number of species, belonging to two genera, Microtermes and Odontotermes. The genus Microtermes was represented by a single species, Microtermes obesi. The genus Odontotermes was represented by the highest number of species (7): O. obesus was the dominant (41 per cent) and most widely distributed species in Karnataka, Kerala, Maharashtra, and Tamil Nadu, followed by O. feae (19 per cent), O. assmuthi (11 per cent), and others such as O. bellahunisensis, O. horni, O. redemanni, and O. yadevi. Nasutitermitinae was represented by two species from two genera, Nasutitermes anamalaiensis and Trinervitermes biformis. Termitinae was represented by Labiocapritermes distortus. Rhinotermitidae was represented by a single subfamily, Heterotermitinae, in which two species, Heterotermes balwanthi and H. malabaricus, were recorded. Genetic relationships among termites collected from various locations in the Western Ghats of India were characterized based on mitochondrial DNA sequences (12S, 16S, and COII). Sequence analysis and divergence among the species were assessed. These results suggest that the use of both molecular and morphological approaches is crucial in ensuring accurate species identification. Efforts were made to understand their evolution and to address ambiguities in morphological taxonomy. The implications of the study for revising the taxonomy of Indian termites, their characterization, and molecular comparisons between the sequences are discussed.

Keywords: isoptera, mitochondrial DNA sequences, rhinotermitidae, termitidae, Western ghats

Procedia PDF Downloads 254
8511 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking on more numerous and increasingly important roles in society. To make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led researchers to investigate ways to explain the AI behavior behind these systems, in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process by which beliefs are valued and altered. In a social HRI (human-robot interaction) setup, theory of mind is crucial to easing the intentionality gap, and to achieve that we should be able to make inferences over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, in which an unproven cause is taken as truth, induces changes in the belief base that can directly affect future decisions and actions. If we aim to be inspired by human thought in applying levels of theory of mind to artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulating effect emotions have on the belief system, how it affects the logic inference process, and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument-strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.

Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind

Procedia PDF Downloads 119
8510 Isolation and Transplantation of Hepatocytes in an Experimental Model

Authors: Inas Raafat, Azza El Bassiouny, Waldemar L. Olszewsky, Nagui E. Mikhail, Mona Nossier, Nora E. I. El-Bassiouni, Mona Zoheiry, Houda Abou Taleb, Noha Abd El-Aal, Ali Baioumy, Shimaa Attia

Abstract:

Background: Orthotopic liver transplantation is an established treatment for patients with severe acute and end-stage chronic liver disease. The shortage of donor organs continues to be the rate-limiting factor for liver transplantation throughout the world. Hepatocyte transplantation is a promising treatment for several liver diseases and can also be used as a "bridge" to liver transplantation in cases of liver failure. Aim of the work: This study was designed to develop a highly efficient protocol for the isolation and transplantation of hepatocytes in an experimental Lewis rat model, to provide satisfactory guidelines for future application in humans. Materials and Methods: Hepatocytes were isolated from the liver by a double perfusion technique, and bone marrow cells were isolated by centrifugation of the shafts of the tibia and femur of donor Lewis rats. Recipient rats were subjected to a sub-lethal dose of irradiation 2 days before transplantation. In a laparotomy operation, the spleen was injected with freshly isolated hepatocytes, and bone marrow cells were injected intravenously. The animals were sacrificed 45 days later, and splenic sections were prepared and stained with H&E, PAS, AFP, and Prox1. Results: The data obtained from this study showed that the double perfusion technique is successful in separating hepatocytes with respect to cell number and viability. The method used for bone marrow cell separation also gave excellent results regarding cell number and viability. Intrasplenic engraftment of hepatocytes and liver tissue formation within the splenic tissue were found in 70% of cases. Hematoxylin and eosin stained splenic sections from 7 rats showed sheets and clusters of cells among the splenic tissue. Periodic Acid Schiff stained splenic sections from 7 rats showed clusters of hepatocytes with intensely stained pink cytoplasmic granules, denoting the presence of glycogen. Splenic sections from 7 rats stained with anti-α-fetoprotein antibody showed brownish cytoplasmic staining of the hepatocytes, denoting positive expression of AFP. Splenic sections from 7 rats stained with anti-Prox1 showed brownish nuclear staining of the hepatocytes, denoting positive expression of the Prox1 gene in these cells. Positive expression of the Prox1 gene was also detected in lymphocyte aggregations in the spleens. Conclusions: Isolation of liver cells by the double perfusion technique using collagenase buffer is a reliable method with a very satisfactory yield regarding cell number and viability. The intrasplenic route of transplantation of freshly isolated liver cells in an immunocompromised model was found to give good results regarding cell engraftment and tissue formation. Further studies are needed to assess the function of engrafted hepatocytes by measuring prothrombin time and serum albumin and bilirubin levels.

Keywords: Lewis rats, hepatocytes, BMCs, transplantation, AFP, Prox1

Procedia PDF Downloads 299
8509 Using Linear Logistic Regression to Evaluate Patient and System Delay and Effective Factors in the Mortality of Patients with Acute Myocardial Infarction

Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian

Abstract:

Background: Mortality due to myocardial infarction (MI) often occurs during the first hours after symptom onset, so timely presentation to hospital for the necessary treatment can help decrease the mortality rate. The aim of this study was to investigate the impact of effective factors on the mortality of MI patients using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All patients who died were considered the case group (n=27), and 27 matched surviving patients were selected as the control group. Data were collected for all patients in both groups using the same checklist and then analyzed with SPSS version 24. The linear logistic regression model was used to determine the factors affecting the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group, at 44.4%, was significantly higher than in the control group, at 7.4% (p=0.002). The proportion of performed PCIs in the case group, at 40.7%, was significantly lower than in the control group, at 74.1% (p=0.013). The interval between hospital admission and PCI in the case group, at 110.9 min, was significantly longer than in the control group, at 56 min (p=0.001). The mean delay from symptom onset to hospital admission (patient delay) and the mean delay from hospital admission to treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to other factors. Conclusion: The results of this study showed that, of all the studied factors, the number of performed PCIs, history of non-cardiac illness, and the interval between symptom onset and PCI have a significant relation with the mortality of MI patients; other factors were not significant. Further studies with larger samples, investigating other factors such as smoking and weather, are recommended.
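For illustration, the unadjusted association between PCI and mortality can be computed directly from the 2×2 table implied by the reported proportions (40.7% of 27 cases and 74.1% of 27 controls underwent PCI). This crude odds ratio differs from the adjusted OR=24.5 reported for the multivariable logistic model, which accounts for the other covariates.

```python
# 2x2 table implied by the abstract: PCI status by outcome
cases_pci, cases_total = 11, 27        # deaths: 40.7% underwent PCI
controls_pci, controls_total = 20, 27  # survivors: 74.1% underwent PCI

cases_no_pci = cases_total - cases_pci
controls_no_pci = controls_total - controls_pci

# crude odds ratio of death for "no PCI" versus "PCI"
odds_ratio = (cases_no_pci * controls_pci) / (cases_pci * controls_no_pci)
print(round(odds_ratio, 2))  # → 4.16
```

A crude OR above 1 says that not undergoing PCI was more common among the deaths; the multivariable model is what isolates this effect from age and comorbidity.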

Keywords: acute MI, mortality, heart failure, arrhythmia

Procedia PDF Downloads 113
8508 Determination of Cyanotoxins from Leeukraal and Klipvoor Dams

Authors: Moletsane Makgotso, Mogakabe Elijah, Marrengane Zinhle

Abstract:

South Africa’s water resource quality is increasingly weakened by eutrophication, which deteriorates its usability. Thirty-five percent of fresh water resources are eutrophic to hypertrophic, including grossly enriched reservoirs that go beyond the globally accepted definition of hypertrophy. Failing infrastructure adds to the problem of contaminated urban runoff, which constitutes an important fraction of flows to inland reservoirs, particularly in the non-coastal economic heartland of the country. Eutrophication threatens the provision of potable and irrigation water because of the country's dependence on fresh water resources. Eutrophic water reservoirs increase water treatment costs, become unsuitable for recreational purposes, and pose health risks to humans and animals due to algal proliferation. Eutrophication is caused by high concentrations of phosphorus and nitrogen in water bodies. In South Africa, Microcystis and Anabaena are widely distributed cyanobacteria, with Microcystis being the most dominant bloom-forming cyanobacterial genus associated with toxin production. Two impoundments, the Klipvoor and Leeukraal dams, were selected, as they are mainly used for fishing, recreational, agricultural, and, to some extent, potable water purposes. Total oxidized nitrogen and total phosphorus concentrations were determined as causative nutrients for eutrophication. Chlorophyll a and total microcystins were measured, and cyanobacteria were identified, as indicators of cyanobacterial infestation. The orthophosphate concentration was determined by subjecting the samples to digestion and filtration, followed by spectrophotometric analysis of total and dissolved phosphates using Aquakem kits. Total oxidized nitrogen was analyzed by filtration followed by spectrophotometric analysis. Chlorophyll a was quantified spectrophotometrically by measuring absorbance before and after acidification. Microcystins were detected using the Quantiplate Microcystin Kit, alongside microscopic identification of cyanobacterial species. The Klipvoor dam was found to be hypertrophic throughout the study period, as the mean chlorophyll a concentration of 269.4 µg/l exceeds the mean value for the hypertrophic state. The mean total phosphorus concentration was >0.130 mg/l, and the total microcystin concentration was >2.5 µg/l throughout the study. The most predominant algal species was Microcystis. The Leeukraal dam was found to be mesotrophic, with the potential of becoming eutrophic, as the mean chlorophyll a concentration was 18.49 µg/l, the mean total phosphorus >0.130 mg/l, and the total microcystin concentration <0.16 µg/l. The cyanobacterial species identified in Leeukraal have been classified as those that do not pose a potential risk to an impoundment. Microcystis was present throughout the sampling period and dominant during the warmer seasons. The high nutrient concentrations led to the dominance of Microcystis, which resulted in high levels of microcystins, rendering the impoundments, particularly Klipvoor, undesirable for utilisation.

Keywords: nitrogen, phosphorus, cyanobacteria, microcystins

Procedia PDF Downloads 265
8507 Predicting College Students’ Happiness During the COVID-19 Pandemic: Be Optimistic and Well in College!

Authors: Michiko Iwasaki, Jane M. Endres, Julia Y. Richards, Andrew Futterman

Abstract:

The present study aimed to examine college students’ happiness during the COVID-19 pandemic. Using online survey data from 96 college students in the U.S., a regression analysis was conducted to predict college students’ happiness. The results indicated that a four-predictor model (optimism, college students’ subjective wellbeing, coronavirus stress, and spirituality) explained 57.9% of the variance in students’ subjective happiness, F(4,77)=26.428, p<.001, R2=.579, 95% CI [.41, .66]. The study suggests the importance of learned optimism among college students.
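The four-predictor model can be sketched as an ordinary least-squares regression with an R² readout. The data below are simulated, and the coefficient magnitudes and noise level are assumptions chosen only so the model explains a similar share of variance; only the analysis pipeline mirrors the abstract.

```python
import numpy as np

# Simulated version of the four-predictor model: happiness regressed on
# optimism, subjective wellbeing, coronavirus stress, and spirituality.
rng = np.random.default_rng(0)
n = 96
X = rng.normal(size=(n, 4))                   # four standardized predictors
beta = np.array([0.6, 0.5, -0.3, 0.2])        # assumed effect directions
y = X @ beta + rng.normal(scale=0.7, size=n)  # happiness = signal + noise

X1 = np.column_stack([np.ones(n), X])         # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))  # proportion of variance explained, cf. R2 = .579
```

The F-test reported in the abstract then compares the explained variance (4 numerator df) against the residual variance (n minus 5 df, here 77 after listwise exclusions in the original data).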

Keywords: COVID-19, optimism, spirituality, well-being

Procedia PDF Downloads 212
8506 A Qualitative Study Exploring Factors Influencing the Uptake of and Engagement with Health and Wellbeing Smartphone Apps

Authors: D. Szinay, O. Perski, A. Jones, T. Chadborn, J. Brown, F. Naughton

Abstract:

Background: The uptake of health and wellbeing smartphone apps is largely influenced by popularity indicators (e.g., rankings), rather than evidence-based content. Rapid disengagement is common. This study aims to explore how and why potential users 1) select and 2) engage with such apps, and 3) how increased engagement could be promoted. Methods: Semi-structured interviews and a think-aloud approach were used to allow participants to verbalise their thoughts whilst searching for a health or wellbeing app online, followed by a guided search in the UK National Health Service (NHS) 'Apps Library' and Public Health England’s (PHE) 'One You' website. Recruitment took place between June and August 2019. Adults interested in using an app for behaviour change were recruited through social media. Data were analysed using the framework approach. The analysis is both inductive and deductive, with the coding framework being informed by the Theoretical Domains Framework. The results are further mapped onto the COM-B (Capability, Opportunity, Motivation - Behaviour) model. The study protocol is registered on the Open Science Framework (https://osf.io/jrkd3/). Results: The following targets were identified as playing a key role in increasing the uptake of and engagement with health and wellbeing apps: 1) psychological capability (e.g., reduced cognitive load); 2) physical opportunity (e.g., low financial cost); 3) social opportunity (e.g., embedded social media); 4) automatic motivation (e.g., positive feedback). Participants believed that the promotion of evidence-based apps on NHS-related websites could be enhanced through active promotion on social media, adverts on the internet, and in general practitioner practices. Future Implications: These results can inform the development of interventions aiming to promote the uptake of and engagement with evidence-based health and wellbeing apps, a priority within the UK NHS Long Term Plan ('digital first'). 
The targets identified across the COM-B domains could help organisations that provide platforms for such apps to increase impact through better selection of apps.

Keywords: behaviour change, COM-B model, digital health, mhealth

Procedia PDF Downloads 143
8505 Estimation of Dynamic Characteristics of a Middle-Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, the evaluation of vibration characteristics is important. In recent years, with the increase in super high-rise buildings, evaluating the response of not only the first mode but also higher modes has become important. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, the characteristics of the first and second modes were studied using earthquake observation records of an SRC building, applying a frequency filter to an ARX model. First, we studied the change in the eigenfrequency and the damping ratio during the 3.11 earthquake. The eigenfrequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigenfrequencies are both about 0.7. Although the damping ratio has a larger error than the eigenfrequency, both the 1st and 2nd damping ratios are 3 to 5%. There is also a strong correlation between the 1st and 2nd eigenfrequencies, with a regression line of y=3.17x; for the damping ratio, the regression line is y=0.90x, so the 1st and 2nd damping ratios are of approximately the same degree. Next, we studied the eigenfrequency and damping ratio over the long term, from 1998 to 2014, including the 3.11 earthquake, with all considered earthquakes connected in order of occurrence. The eigenfrequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of both the 1st and 2nd eigenfrequencies up to about 7 years later are about 0.8. Both the 1st and 2nd damping ratios are about 1 to 6%; after the 3.11 earthquake, the 1st increased by about 1% and the 2nd by less than 1%. For the eigenfrequency, there is a strong correlation between the 1st and 2nd, with a regression line of y=3.17x; for the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are of approximately the same degree. Based on the above, the changes are summarized as follows. Over the long term, both the 1st and 2nd eigenfrequencies gradually declined from immediately after completion, tended to stabilize after a few years, and declined further after the 3.11 earthquake; the 1st and 2nd are strongly correlated, with the same declining period and decreasing rate. Both the 1st and 2nd damping ratios remain at about 1 to 6% over the long term; after the 3.11 earthquake, the 1st increased by about 1% and the 2nd by less than 1%, with the two remaining of approximately the same degree.
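The extraction of eigenfrequency and damping ratio from a fitted time-series model can be sketched as follows. This AR(2) fit to a simulated free vibration is a simplified stand-in for the authors' frequency-filtered ARX identification (it omits the input term and the filter), and the 2 Hz frequency and 3% damping are assumed values, not the building's.

```python
import numpy as np

# Simulate a damped single-mode free vibration (assumed parameters)
dt, f_n, zeta = 0.01, 2.0, 0.03          # 2 Hz mode, 3% damping
w_n = 2 * np.pi * f_n
w_d = w_n * np.sqrt(1 - zeta**2)         # damped natural frequency
t = np.arange(0, 20, dt)
y = np.exp(-zeta * w_n * t) * np.cos(w_d * t)

# Least-squares fit of the AR(2) relation y[k] = c1*y[k-1] + c2*y[k-2]
Phi = np.column_stack([y[1:-1], y[:-2]])
c1, c2 = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

# Map a discrete-time pole to the s-plane to recover modal parameters
pole = np.roots([1.0, -c1, -c2])[0]      # one of the conjugate poles
s = np.log(pole) / dt
print(abs(s) / (2 * np.pi))              # estimated eigenfrequency [Hz]
print(-s.real / abs(s))                  # estimated damping ratio
```

With measured records, the same pole-to-modal-parameter mapping is applied to the identified model in each time window, which is how the eigenfrequency and damping ratio can be tracked through an earthquake.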

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 205
8504 Competitiveness of Animation Industry: The Case of Thailand

Authors: T. Niracharapa

Abstract:

This research examined the competitiveness of the animation industry in Thailand. Data were collected from articles, related reports, websites, news, research, and interviews with key persons from both the public and private sectors. The diamond model was used for the analysis. The major factors driving the Thai animation industry forward include a quality workforce, creativity, and strong associations. However, discontinuity in government support, infrastructure, marketing, IP creation, and financial constraints keep the Thai animation industry less competitive in the global market.

Keywords: animation, competitiveness, government, Thailand, market

Procedia PDF Downloads 415
8503 Automatic Vowel and Consonant's Target Formant Frequency Detection

Authors: Othmane Bouferroum, Malika Boudraa

Abstract:

In this study, a dual exponential model for CV formant transitions is derived from the locus theory of speech perception. An algorithm for automatic detection of vowel and consonant target formant frequencies is then developed and tested on real speech. The results show that vowels and consonants are detected through transitions rather than their small stable portions. Vowel reduction is also clearly observed in our data. These results are confirmed by observations made in perceptual experiments in the literature.

Keywords: acoustic invariance, coarticulation, formant transition, locus equation

Procedia PDF Downloads 254
8502 School Students’ Career Guidance in the Context of Inclusive Education in Kazakhstan: Experience and Perspectives

Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip

Abstract:

The article presents the main results of a study conducted within the grant project «Organizational and methodological foundations for ensuring the inclusiveness of school students’ career guidance» (2022-2024). The main aim of the project is to address the absence of developed mechanisms coordinating the activities of all stakeholders in preparing school students for a conscious career choice that takes into account their individual opportunities and special educational needs. To achieve this aim, and in accordance with the implementation plan, the authors analysed the foreign and national literature on the problem, studied the state of school students’ career guidance and socialization in the context of inclusive education, and explored international experience on the issue. The analysis of the national literature showed an annual increase in the number of students with special educational needs, as well as rapidly changing labour market demand, both of which influence professional self-determination in modern society. Participants from five of the country’s regions, selected for geographical coverage (south, north, west, centre, and the cities of republican significance) and including students, their parents, general secondary school administrators and educators, and employers, took part in the study. To ensure the validity of the results, the triangulation method was used, combining qualitative and quantitative methods; the data were analysed independently and then compared. Ethical principles were observed at all stages of the study.
The study examined the characteristics of the career guidance system in the modern school, the roles and involvement of stakeholders in that system, educators’ opinions on school students’ preparedness for career choice, and the factors impeding the effectiveness of career guidance in schools. It revealed the problem of stakeholder disunity and inconsistency, which causes systemic labour market distortions and the growth of low-skilled labour and unemployment, including among people with special educational needs. Another issue identified was educators’ insufficient readiness to prepare students for career choice in the context of inclusive education. To study cutting-edge experience in organizing career guidance systems for young people and to develop mechanisms coordinating the actions of all stakeholders in preparing students for career choice, the researchers explored career guidance institutions in France, Japan, and Germany. Based on the study results and international experience, a systemic contemporary model of school students’ professional self-determination, considering their individual opportunities and special educational needs, has been developed. The main principles of this model are consistency, accessibility, inclusiveness, openness, coherence, and continuity. Perspectives for developing students’ career guidance in the context of inclusive education are also suggested.

Keywords: career guidance, inclusive education, model of school students’ professional self-determination, psychological and pedagogical support, special educational needs

Procedia PDF Downloads 25
8501 Co-Creation of an Entrepreneurship Living Learning Community: A Case Study of Interprofessional Collaboration

Authors: Palak Sadhwani, Susie Pryor

Abstract:

This paper investigates interprofessional collaboration (IPC) in the context of entrepreneurship education. Collaboration has been found to enhance problem solving, leverage expertise, improve resource allocation, and create organizational efficiencies. However, research suggests that successful collaboration is hampered by individual and organizational characteristics. IPC occurs when two or more professionals work together to solve a problem or achieve a common objective. The necessity for this form of collaboration is particularly prevalent in cross-disciplinary fields. In this study, we utilize social exchange theory (SET) to examine IPC in the context of an entrepreneurship living learning community (LLC) at a large university in the Western United States. Specifically, we explore these research questions: How are rules or norms established that govern the collaboration process? How are resources valued and distributed? How are relationships developed and managed among and between parties? LLCs are defined as groups of students who live together in on-campus housing and share similar academic or special interests. In 2007, the Association of American Colleges and Universities named living communities a high impact practice (HIP) because of their capacity to enhance and give coherence to undergraduate education. The entrepreneurship LLC in this study was designed to offer first year college students the opportunity to live and learn with like-minded students from diverse backgrounds. While the university offers other LLC environments, the target residents for this LLC are less easily identified and are less apparently homogenous than residents of other LLCs on campus (e.g., Black Scholars, LatinX, Women in Science and Education), creating unique challenges. The LLC is a collaboration between the university’s College of Business & Public Administration and the Department of Housing and Residential Education (DHRE). 
Both parties contribute staff, technology, living and learning spaces, and other student resources. This paper reports the results of an ethnographic case study chronicling the start-up challenges associated with the co-creation of the LLC. SET provides a general framework for examining how resources are valued and exchanged. In this study, SET offers insights into the processes through which the parties negotiate tensions that arise from approaching this shared project from very different perspectives and cultures in a novel project environment. These tensions stem from a variety of factors, including team formation and management, allocation of resources, and differing output expectations. The results are useful to both scholars and practitioners of entrepreneurship education and organizational management, suggesting probable points of conflict and potential paths toward reconciliation.

Keywords: case study, ethnography, interprofessional collaboration, social exchange theory

Procedia PDF Downloads 127
8500 Assessment of Mortgage Applications Using Fuzzy Logic

Authors: Swathi Sampath, V. Kalaichelvi

Abstract:

Assessing the risk posed by a borrower to a lender is one of the common problems that financial institutions have to deal with. Consumers applying for a mortgage are generally compared to each other using a number called the credit score, which is generated by applying a mathematical algorithm to information in the applicant’s credit report. The higher the credit score, the lower the risk posed by the applicant, and the more favourable the applicant is to the lender. The objective of the present work is to use fuzzy logic and linguistic rules to create a model that generates credit scores.
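
Fuzzy scoring of this kind combines linguistic rules over membership functions and then defuzzifies to a crisp score. A minimal illustrative sketch using triangular membership functions: the input variables, rule base, and score anchors (750 "good", 550 "poor") are hypothetical choices, not the authors' model:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def credit_score(income, debt_ratio):
    """Toy fuzzy scorer: fire two linguistic rules, then defuzzify
    with a weighted average of each rule's output score anchor."""
    # Rule 1: IF income is high AND debt ratio is low THEN score is good (750)
    w_good = min(tri(income, 40, 100, 160), tri(debt_ratio, -0.4, 0.0, 0.4))
    # Rule 2: IF income is low OR debt ratio is high THEN score is poor (550)
    w_poor = max(tri(income, -60, 0, 60), tri(debt_ratio, 0.2, 0.6, 1.0))
    total = w_good + w_poor
    if total == 0:
        return 650.0  # neutral default when no rule fires
    return (750 * w_good + 550 * w_poor) / total
```

For example, `credit_score(100, 0.0)` fires only the "good" rule and returns 750, while `credit_score(0, 0.6)` fires only the "poor" rule and returns 550; intermediate applicants blend the two anchors in proportion to rule strength.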

Keywords: credit scoring, fuzzy logic, mortgage, risk assessment

Procedia PDF Downloads 390