Search results for: hand length
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6151

901 An Evaluation of the Relationship between the Anthropometric Measurements and Blood Lipid Profiles in Adolescents

Authors: Nalan Hakime Nogay

Abstract:

Childhood obesity is a significant health issue that is currently on the rise all over the world. In recent years, the relationship between childhood obesity and cardiovascular disease risk has been pointed out. The purpose of this study is to evaluate the relationship between some anthropometric indicators and blood lipid levels in adolescents. The study was conducted on a total of 252 adolescents (200 girls and 52 boys) aged 12 to 18 years. Blood was drawn from each participant in the morning, after a 10-hour overnight fast, to analyze total cholesterol, HDL, LDL and triglyceride levels. Body weight, height, waist circumference, subscapular skinfold thickness and triceps skinfold thickness were measured, and the waist/height ratio, BMI and body fat ratio were calculated for each participant. Blood lipid levels were categorized as acceptable, borderline or high in accordance with the 2011 Expert Panel Integrated Guidelines. The body fat ratios, total blood cholesterol and HDL levels of the girls were significantly higher than those of the boys, whereas their waist circumference values were lower. The triglyceride levels and the total cholesterol/HDL, LDL/HDL and triglyceride/HDL ratios of the group with BMI ≥ 95th percentile (the obese group) were significantly higher than those of the overweight and normal-weight BMI groups, while the HDL level of the obese group was significantly lower. No significant relationship could be established, however, between total blood cholesterol and LDL levels and the anthropometric measurements. The BMI, waist circumference, waist/height ratio and body fat ratio of the group with a high triglyceride level (≥ 130 mg/dl) were significantly higher than those of the borderline (90-129 mg/dl) and acceptable (< 90 mg/dl) groups.
The BMI, waist circumference and waist/height ratio of the group with a low HDL level (< 40 mg/dl) were significantly higher than those of the normal (> 45 mg/dl) and borderline (40-45 mg/dl) groups. All anthropometric measurements of the group with a high triglyceride/HDL ratio (≥ 3) were significantly higher than those of the group with a lower ratio (< 3). A high BMI, waist/height ratio or waist circumference is related to low HDL, high blood triglyceride and a high triglyceride/HDL ratio. A high body fat ratio, on the other hand, is associated with low HDL and a high triglyceride/HDL ratio. Tackling childhood and adolescent obesity is important in terms of preventing cardiovascular diseases.
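The cutoffs used above can be sketched as a small categorization helper. This is an illustrative sketch, not part of the study; the function names and structure are assumptions, while the numeric thresholds come from the abstract itself.

```python
def categorize_triglyceride(tg_mg_dl):
    """Categorize a triglyceride level (mg/dL) using the cutoffs
    cited in the abstract: < 90 acceptable, 90-129 borderline, >= 130 high."""
    if tg_mg_dl < 90:
        return "acceptable"
    elif tg_mg_dl <= 129:
        return "borderline"
    return "high"

def categorize_hdl(hdl_mg_dl):
    """Categorize an HDL level (mg/dL): < 40 low, 40-45 borderline, > 45 normal."""
    if hdl_mg_dl < 40:
        return "low"
    elif hdl_mg_dl <= 45:
        return "borderline"
    return "normal"

def tg_hdl_ratio_high(tg_mg_dl, hdl_mg_dl, threshold=3.0):
    """Flag an elevated triglyceride/HDL ratio (>= 3 in the study)."""
    return tg_mg_dl / hdl_mg_dl >= threshold
```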

Keywords: adolescent, body fat, body mass index, lipid profile

Procedia PDF Downloads 248
900 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and poor peak shape were observed for the acetic acid peak, owing to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS use a derivatization technique to protect acetic acid. As per the compendia, a risk-based approach was deemed appropriate for determining the degree and extent of the validation process to assure the fitness of the procedure; the total error concept was therefore selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lowest level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column (Agilent; internal diameter 530 µm, film thickness 2.0 µm, length 30 m). Helium at a constant flow of 6.0 mL/min in constant makeup mode was used as the carrier gas.
The present method is simple, rapid and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
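The validated working ranges quoted above can be encoded as a simple range check, for example when flagging results that fall outside the method's scope. The dictionary layout and function are illustrative assumptions; only the ppm ranges are taken from the abstract.

```python
# Validated working ranges (ppm) quoted in the method description.
METHOD_RANGES_PPM = {
    "isopropyl alcohol": (50, 200),
    "ethanol": (50, 3000),
    "methanol": (50, 400),
    "acetic acid": (100, 400),
}

def within_validated_range(solvent, measured_ppm):
    """Return True if a measured concentration (ppm) falls inside the
    validated range of the common GC-HS method for that solvent."""
    low, high = METHOD_RANGES_PPM[solvent]
    return low <= measured_ppm <= high
```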

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 132
899 Critical Understanding on Equity and Access in Higher Education Engaging with Adult Learners and International Student in the Context of Globalisation

Authors: Jin-Hee Kim

Abstract:

What distinguishes globalization from previous waves of change is the scope and intensity of the changes, which together affect many parts of a nation's system. Globalization is related to the concept of 'internationalization' in that a nation state formulates strategies in many areas of its governance to react actively to it; in short, globalization is a 'catalyst,' and internationalization is a 'response.' In this regard, higher education is one of the representative fields in which globalization changes the terrain of national policy-making. Initiated and long dominated by the Western world, internationalization has now expanded to 'late movers' such as the Asia-Pacific countries. The internationalization of Korean higher education therefore occupies a unique place in this arena. Korea is still one of the major countries sending its students to the so-called 'first world,' yet it has also begun recruiting international students from around the world to its own higher education system. Since the new millennium in particular, the internationalization of higher education has been pursued at full scale and has gradually become an important policy agenda, advanced both by opening Korea's turf to foreign educational service providers and by recruiting prospective students from other countries. The latter in particular has been highlighted under the government project named 'Study Korea,' launched in 2004. Both global and local issues and motivations underpinned this nationwide project: attracting international students promises desirable economic outcomes such as reducing the educational deficit and employing graduates in Korean industry after the completion of their studies, to name a few. In a similar vein, Korea's higher education institutes have also started to receive a new group of participants: adult learners.
When it comes to questions about the quality of, and access to, education for these new learners, the answers are quite tricky. This study investigates the different dimensions of education provision and the learning process needed to empower diverse groups regardless of nationality, race, class and gender in Korea. Listening to the voices of international students and adult learners as non-traditional participants in a changing Korean higher educational space will benefit not only the students themselves but also the Korean stakeholders who should try to provide more comprehensive and fair educational provision for increasingly diverse groups of learners.

Keywords: education equity, access, globalisation, international students, adult learning, learning support

Procedia PDF Downloads 195
898 Solar Electric Propulsion: The Future of Deep Space Exploration

Authors: Abhishek Sharma, Arnab Banerjee

Abstract:

The research is intended to study solar electric propulsion (SEP) technology for planetary missions. The main benefits of using solar electric propulsion for such missions are shorter flight times, more frequent target accessibility and the use of a smaller launch vehicle than that required by a comparable chemical propulsion mission. Energized by electric power from on-board solar arrays, an electrically propelled system uses roughly 10 times less propellant than a conventional chemical propulsion system, yet the reduced propellant mass still delivers performance capable of propelling robotic and crewed missions beyond Low Earth Orbit (LEO). The thrusters used in SEP are gridded ion thrusters and Hall effect thrusters. This research is aimed solely at studying ion thrusters, investigating the complications related to them, and determining what can be done to overcome these problems. Ion thrusters are used because they have a lower total propellant requirement and a substantially longer operating life. In an ion thruster, the anode pushes or directs the incoming electrons from the cathode. However, the anode is not maintained at a very high potential, which leads to divergence. Divergence causes the charges to interact with the surfaces of the thruster: just as the charges ionize the xenon gas, they can also ionize the surfaces and, over time, erode and contaminate them, which limits the lifetime of the thruster. One solution to this problem is to use surface materials that are not easy to ionize. Other approaches are to increase the potential of the anode so that the electrons do not deviate much, or to reduce the length of the thruster so that the positive anode is more effective. The aim is to work on these aspects, constraining the deviation of the charges while keeping the input power constant, and hence to increase the lifetime of the thruster.
Ring cusp magnets are predominantly used in ion thrusters. However, the study is also intended to observe the effect of using a solenoid to produce a micro-solenoidal magnetic field, in addition to the ring cusp field used in the discharge chamber to prevent electrons from interacting with the chamber walls. Another foremost area of interest is how power can be provided to the solar electric propulsion vehicle for lowering and boosting the orbit of the spacecraft, while also supplying a substantial amount of power to the solenoid to produce stronger magnetic fields. This can be achieved using an electrodynamic tether, which would serve as a power source for both the vehicle and the solenoids in the ion thruster, eliminating the need to carry extra propellant on the spacecraft and thereby reducing the weight and the cost of space propulsion.
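The order-of-magnitude propellant saving claimed for electric propulsion follows directly from the Tsiolkovsky rocket equation. The sketch below is illustrative only: the specific impulse values (roughly 300 s for chemical engines, roughly 3000 s for gridded ion thrusters) and the 4 km/s transfer are assumed typical figures, not numbers from the paper.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass expelled as propellant for a given
    delta-v (m/s) and specific impulse (s), via the Tsiolkovsky
    rocket equation: m_p / m_0 = 1 - exp(-delta_v / (isp * g0))."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

# Assumed typical values: chemical Isp ~ 300 s, gridded ion thruster
# Isp ~ 3000 s, for a 4 km/s transfer.
chem = propellant_fraction(4000.0, 300.0)
ion = propellant_fraction(4000.0, 3000.0)
```

With these assumed figures the chemical stage burns most of its initial mass as propellant while the ion stage burns only a small fraction, which is the intuition behind the abstract's "10 times less propellant" statement.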

Keywords: electro-dynamic tether, ion thruster, lifetime of thruster, solar electric propulsion vehicle

Procedia PDF Downloads 196
897 Perspectives and Challenges of a Functional Bread with Yeast Extract to Improve the Human Diet

Authors: Cláudia Patrocínio, Beatriz Fernandes, Ana Filipa Pires

Abstract:

Background: Mirror therapy (MT) is used to improve motor function after stroke. During MT, a mirror is placed between the two upper limbs (UL), reflecting movements of the non-affected side as if they were movements of the affected side. Objectives: The aim of this review is to analyze the evidence on the effectiveness of MT in the recovery of UL function in the chronic post-stroke population. Methods: The literature search was carried out in the PubMed, ISI Web of Science and PEDro databases. Inclusion criteria: a) studies including individuals diagnosed with stroke for at least 6 months; b) intervention with MT in the UL, or comparing it with other interventions; c) articles published until 2023; d) articles published in English or Portuguese; e) randomized controlled studies. Exclusion criteria: a) animal studies; b) studies that do not provide a detailed description of the intervention; c) studies using central electrical stimulation. The methodological quality of the included studies was assessed using the Physiotherapy Evidence Database (PEDro) scale; studies scoring < 4 on the PEDro scale were excluded. Eighteen studies met all the inclusion criteria. Main results and conclusions: The quality of the studies varied between 5 and 8. One article compared muscular strength training (MST) with and without MT, four articles compared MT with conventional therapy (CT), one study compared extracorporeal shock therapy (EST) with and without MT, one study compared functional electrical stimulation (FES), MT and biofeedback, three studies compared MT with a mesh glove (MG) or sham therapy, five articles compared performing bimanual exercises with and without MT, and three studies compared MT with virtual reality (VR) or robot training (RT).
In each article, changes in function and structure (an International Classification of Functioning, Disability and Health parameter) were assessed mainly using the Fugl-Meyer Assessment-Upper Limb scale, while activity and participation (also ICF parameters) were evaluated using different scales in each study. Positive results were seen in these parameters overall. The results suggest that MT combined with other therapies is more effective for motor recovery and function of the affected UL than those techniques alone, although the effects were modest in most of the included studies. There was also a more significant improvement in the distal movements of the affected hand than in the rest of the UL.

Keywords: physical therapy, mirror therapy, chronic stroke, upper limb, hemiplegia

Procedia PDF Downloads 35
896 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the 'Quattuor 45' in the investigated area. The study begins with theoretical considerations and the numerous up-to-date investigations of waves approaching the coast. A detailed schematic view of the instrument is complemented with ground-plan and side-view pictures. Horizontal stability of the instrument is achieved by a mooring that relies on two concrete blocks, and vertical wave-peak monitoring is ensured by a float above the instrument. The combination of horizontal stability and vertical wave-peak monitoring makes it possible to create a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named after the way this database is acquired: the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, after which detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit kN/m². The paper also suggests a graphical presentation of the results as a multi-line graph, with the wave pressure on the left vertical axis, the wind speed on the right vertical axis, and the time of measurement on the horizontal axis. The paper proposes an algorithm for wind speed measurement, showing results for two characteristic winds of the Adriatic Sea, called 'Bura' and 'Jugo'.
The first of these is a northern wind that reaches high speeds, causing low and extremely steep waves in which the wave pressure is relatively weak. The southern wind 'Jugo', on the other hand, has a lower speed than the northern wind, but because of its long duration at constant speed it causes extremely long and high waves that produce extremely high wave pressure.
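Converting the logged load-cell readings into the kN/m² unit used by the instrument can be sketched as below. This is an assumption-laden illustration: the abstract does not publish the sensing area or whether the four cells are averaged or summed, so both the averaging choice and the plate area are hypothetical parameters.

```python
def wave_pressure_kn_per_m2(forces_newton, plate_area_m2):
    """Average the four load-cell forces (N) logged in one second and
    convert to a pressure in kN/m^2, the unit used by 'Quattuor 45'.
    Averaging (rather than summing) and the plate area are assumed,
    not published specifications."""
    mean_force = sum(forces_newton) / len(forces_newton)
    return mean_force / plate_area_m2 / 1000.0  # N/m^2 -> kN/m^2
```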

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 183
895 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. Their disadvantage (especially for recognition-based passwords) is a smaller password space, which makes them more vulnerable to brute-force attacks; graphical passwords are also highly susceptible to shoulder surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection with two modes: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration; three durations, 5 seconds (Session 2), 10 seconds (Session 3) and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed and users were asked to replicate the password. A total of 74, 57, 50 and 44 users participated in Sessions 1, 2, 3 and 4, respectively. Machine learning algorithms were then applied to determine whether the person entering a password is a genuine user or an imposter. Five machine learning algorithms were deployed to compare performance in user authentication: decision trees, linear discriminant analysis, naive Bayes, support vector machines (SVMs) with a Gaussian radial basis kernel function, and k-nearest neighbor. Gesture-based password features vary from one entry to the next, so it is difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
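The normalization step described above, and the classification that follows it, can be sketched with stdlib-only code. The study used scikit-style classifiers (decision trees, LDA, naive Bayes, RBF-kernel SVMs, k-NN); the minimal 1-nearest-neighbor stand-in below is only illustrative of the pipeline, not the study's implementation, and the feature ordering is assumed.

```python
def min_max_normalize(rows):
    """Scale each feature column (e.g. score, length, speed, size)
    of a list of feature vectors to the [0, 1] range."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    spans = [max(c) - lo or 1.0 for c, lo in zip(cols, mins)]  # avoid /0
    return [[(v - lo) / s for v, lo, s in zip(r, mins, spans)] for r in rows]

def nearest_neighbor_label(train_rows, train_labels, query):
    """1-NN classification by squared Euclidean distance: a minimal
    stand-in for the k-NN classifier evaluated in the study."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_rows)), key=lambda i: dist2(train_rows[i], query))
    return train_labels[best]
```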

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 85
894 Estimating Groundwater Seepage Rates: Case Study at Zegveld, Netherlands

Authors: Wondmyibza Tsegaye Bayou, Johannes C. Nonner, Joost Heijkers

Abstract:

This study aimed to identify and estimate dynamic groundwater seepage rates using four comparative methods: the Darcian approach, the water balance approach, the tracer method, and modeling. The theoretical background to these methods is brought together in this study. The methodology was applied to a case study area at Zegveld on the advice of the Water Board Stichtse Rijnlanden. Data were collected from various offices and through a field campaign in the winter of 2008/09. In the complex confining layer of the study area, the phreatic groundwater table lies at a shallow depth relative to the piezometric water level. Data were available for the model years 1989 to 2000 and for winter 2008/09. The higher groundwater table produces predominantly downward seepage in the study area. Results indicated that net recharge to the groundwater table (precipitation excess) and the ditch system are the principal sources of seepage across the complex confining layer; especially in the summer season, the contribution from the ditches is significant. Water is supplied from the River Meije through a pumping system to meet the ditches' water demand. The groundwater seepage rate was distributed unevenly throughout the study area, averaging 0.60 mm/day at the nature reserve for the model years 1989 to 2000 and 0.70 mm/day for winter 2008/09. Due to data restrictions, the seepage rates were mainly determined with the Darcian method; the water balance approach and the tracer method were applied to compute the flow exchange within the ditch system. The site had various validated data sources for groundwater levels and vertical flow resistance. Compared with TNO-DINO groundwater level data, the phreatic groundwater level map overestimated the groundwater level depth by 28 cm. The hydraulic resistance values obtained from the 3D geological map, compared with the TNO-DINO data, agreed with the model values before calibration.
On the other hand, the calibrated model significantly underestimated the downward seepage in the area compared with the field-based computations following the Darcian approach.
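The Darcian estimate used above reduces to a head difference divided by the vertical hydraulic resistance of the confining layer. The sketch below illustrates that relation; the numeric example values are chosen only to reproduce the order of magnitude reported (about 0.6 mm/day) and are not data from the study.

```python
def darcian_seepage_mm_per_day(phreatic_head_m, piezometric_head_m,
                               hydraulic_resistance_days):
    """Darcian seepage across a confining layer: q = (h_top - h_bottom) / c,
    with heads in metres and vertical hydraulic resistance c in days.
    Positive values indicate downward seepage (phreatic table above the
    piezometric level, as at Zegveld)."""
    q_m_per_day = (phreatic_head_m - piezometric_head_m) / hydraulic_resistance_days
    return q_m_per_day * 1000.0  # m/day -> mm/day

# Illustrative numbers only: a 0.6 m head difference across a layer
# with c = 1000 days gives 0.6 mm/day of downward seepage.
q = darcian_seepage_mm_per_day(-1.4, -2.0, 1000.0)
```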

Keywords: groundwater seepage, phreatic water table, piezometric water level, nature reserve, Zegveld, The Netherlands

Procedia PDF Downloads 67
893 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough set theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among its possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem, and removing redundancy in rough sets can be achieved with a reduct. Many algorithms for generating reducts have been developed, but most are software implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes considerable time for both fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input; the output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
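The two stages described above (core via singleton discernibility entries, then greedy enrichment) can be sketched in software as follows. This is an illustrative reconstruction, not the authors' C or FPGA implementation; in particular, "most common attributes" is interpreted here as most frequent among the still-uncovered discernibility entries, which is one plausible reading of the abstract.

```python
def discernibility_entries(table):
    """For each pair of objects with different decisions, collect the
    set of condition-attribute indices whose values differ.
    `table` is a list of (condition_values_tuple, decision) pairs."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            (attrs_i, dec_i), (attrs_j, dec_j) = table[i], table[j]
            if dec_i != dec_j:
                diff = {k for k, (a, b) in enumerate(zip(attrs_i, attrs_j)) if a != b}
                if diff:
                    entries.append(diff)
    return entries

def two_stage_superreduct(table):
    """Stage 1: the core is the union of singleton discernibility
    entries (the role of the hardware 'singleton detector').
    Stage 2: greedily enrich the core with the most frequent attribute
    among the still-uncovered entries until every entry is covered."""
    entries = discernibility_entries(table)
    chosen = {next(iter(e)) for e in entries if len(e) == 1}  # core
    uncovered = [e for e in entries if not (e & chosen)]
    while uncovered:
        freq = {}
        for e in uncovered:
            for a in e:
                freq[a] = freq.get(a, 0) + 1
        best = max(sorted(freq), key=lambda a: freq[a])
        chosen.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return chosen
```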

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 202
892 Finite Element Modeling and Analysis of Reinforced Concrete Coupled Shear Walls Strengthened with Externally Bonded Carbon Fiber Reinforced Polymer Composites

Authors: Sara Honarparast, Omar Chaallal

Abstract:

Reinforced concrete (RC) coupled shear walls (CSWs) are very effective structural systems for resisting lateral loads due to wind and earthquakes and are particularly used in medium- to high-rise RC buildings. However, most existing old RC structures were designed for gravity loads, or for lateral loads well below those specified in current modern international seismic codes. These structures may behave in a non-ductile manner due to poorly designed joints, insufficient shear reinforcement and inadequate anchorage length of the reinforcing bars. This has been the main impetus to investigate an appropriate strengthening method to address or attenuate the deficiencies of these structures. The objective of this paper is twofold: (i) to evaluate the seismic performance of existing reinforced concrete coupled shear walls under reversed cyclic loading; and (ii) to investigate the seismic performance of RC CSWs strengthened with externally bonded (EB) carbon fiber reinforced polymer (CFRP) sheets. To this end, two CSWs were considered: (a) the first, representative of old CSWs, was designed according to the 1941 National Building Code of Canada (NBCC, 1941) with conventionally reinforced coupling beams; and (b) the second, representative of new CSWs, was designed according to modern NBCC 2015 and CSA/A23.3 2014 requirements with diagonally reinforced coupling beams. Both CSWs were simulated using ANSYS software. The nonlinear behavior of concrete is modeled using multilinear isotropic hardening through a multilinear stress-strain curve, and an elastic-perfectly plastic stress-strain curve is used for the steel. Bond stress-slip is modeled between the concrete and steel reinforcement in the conventional coupling beam, rather than assuming perfect bond, to better represent the slip of the steel bars observed in the coupling beams of these CSWs.
The old-designed CSW was strengthened using CFRP sheets bonded to the concrete substrate and the interface was modeled using an adhesive layer. The behavior of CFRP material is considered linear elastic up to failure. After simulating the loading and boundary conditions, the specimens are analyzed under reversed cyclic loading. The comparison of results obtained for the two unstrengthened CSWs and the one retrofitted with EB CFRP sheets reveals that the strengthening method improves the seismic performance in terms of strength, ductility, and energy dissipation capacity.

Keywords: carbon fiber reinforced polymer, coupled shear wall, coupling beam, finite element analysis, modern code, old code, strengthening

Procedia PDF Downloads 180
891 Improving the Detection of Depression in Sri Lanka: Cross-Sectional Study Evaluating the Efficacy of a 2-Question Screen for Depression

Authors: Prasad Urvashi, Wynn Yezarni, Williams Shehan, Ravindran Arun

Abstract:

Introduction: Primary health services are often the first point of contact that patients with mental illness have with the healthcare system. A number of tools have been developed to increase the detection of depression in the context of primary care. One challenge among many, however, is using these tools within the limited primary care consultation timeframe. Short depression-screening questionnaires that are as effective as more comprehensive diagnostic tools may therefore help improve detection rates among patients visiting a primary care setting. Objective: To develop a 2-Question Questionnaire (2-QQ) to screen for depression in a suburban primary care clinic in Ragama, Sri Lanka, and to determine its sensitivity and specificity. The purpose is to develop a short, culturally adapted screening tool for depression in order to increase detection in the Sri Lankan patient population. Methods: This was a cross-sectional study involving two steps. Step one: verbal administration of the 2-QQ to patients by their primary care physician. Step two: completion of the Peradeniya Depression Scale (PDS), a validated diagnostic tool for depression, by the patient after their consultation with the primary care physician. The results from the PDS were then correlated with the results from the 2-QQ for each patient to determine the sensitivity and specificity of the 2-QQ. Results: A score of 1 or above on the 2-QQ was the most sensitive but least specific threshold; setting the threshold at this level is effective for correctly identifying depressed patients, but also incorrectly captures patients who are not depressed. A score of 6 on the 2-QQ was the most specific but least sensitive threshold; setting the threshold at this level is effective for correctly identifying patients without depression, but not very effective at capturing patients with depression.
Discussion: In the context of primary care, it may be worthwhile setting the 2-QQ screen at a lower threshold for positivity (such as a score of 1 or above). This would generate a high test sensitivity and thus capture the majority of patients that have depression. On the other hand, by setting a low threshold for positivity, patients who do not have depression but score higher than 1 on the 2-QQ will also be falsely identified as testing positive for depression. However, the benefits of identifying patients who present with depression may outweigh the harms of falsely identifying a non-depressed patient. It is our hope that the 2-QQ will serve as a quick primary screen for depression in the primary care setting and serve as a catalyst to identify and treat individuals with depression.
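The sensitivity/specificity trade-off discussed above comes directly from the confusion counts at a chosen threshold. The helper below shows the standard computation; the data layout (paired boolean results per patient) is an assumption for illustration, not the study's analysis code.

```python
def sensitivity_specificity(screen_positive, reference_positive):
    """Compute sensitivity and specificity from paired boolean results:
    screen_positive[i] is the thresholded 2-QQ result and
    reference_positive[i] the reference (PDS) result for patient i."""
    pairs = list(zip(screen_positive, reference_positive))
    tp = sum(s and r for s, r in pairs)            # true positives
    fn = sum((not s) and r for s, r in pairs)      # false negatives
    tn = sum((not s) and (not r) for s, r in pairs)  # true negatives
    fp = sum(s and (not r) for s, r in pairs)      # false positives
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```

Lowering the positivity threshold turns more screens positive, raising sensitivity at the cost of specificity, which is exactly the trade-off the discussion weighs.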

Keywords: depression, primary care, screening tool, Sri Lanka

Procedia PDF Downloads 233
890 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs

Authors: S. Lertsakulpasuk, S. Athichanagorn

Abstract:

As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to raise reservoir pressure and prolong gas production. Due to its high cost, however, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach that increases gas recovery by maintaining reservoir pressure at a much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was conducted to demonstrate the advantages of the proposed method and to determine the most suitable operational parameters for reservoirs with different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well flowed water from the aquifer into the gas reservoirs to increase their pressure and drive gas toward the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for systems with different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. The simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters.
For systems with a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems with a small or moderate aquifer and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel study. The idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated in detail. This detailed study will help a practicing engineer understand the benefits of such a method and implement it with minimum cost and risk.
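As background to the depletion-drive mechanism this abstract relies on, the standard p/Z material balance for a volumetric gas reservoir ties the recovery factor to the abandonment pressure; pressure support, which is what dumpflood provides, changes the achievable abandonment conditions. A minimal sketch with hypothetical pressures and Z-factors (not values from the study):

```python
def depletion_recovery_factor(p_i, z_i, p_ab, z_ab):
    """Volumetric depletion-drive gas material balance:
    RF = Gp / G = 1 - (p/Z)_abandonment / (p/Z)_initial."""
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical initial and abandonment conditions (pressure in psia):
rf_high_pab = depletion_recovery_factor(p_i=3000, z_i=0.85, p_ab=900, z_ab=0.93)
rf_low_pab = depletion_recovery_factor(p_i=3000, z_i=0.85, p_ab=400, z_ab=0.97)
# Recovery rises as the reservoir can be drawn down further, which is why
# insufficient rate at low pressure, rather than a lack of gas in place,
# is what forces abandonment of these reservoirs.
```

This simple balance is only the starting point; the incremental recovery from dumpflood reported in the abstract comes from full reservoir simulation, not from this formula.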

Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs

Procedia PDF Downloads 427
889 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method

Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad

Abstract:

Lithium transition-metal oxides such as LiCoO2, LiMn2O4, LiFePO4, and LiNiO2 have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO2 has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high rate capability, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several factors interfere with producing a material with good electrochemical properties, including the crystallinity, the average particle size, and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and a broad size distribution. Solution methods, on the other hand, can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO2 was synthesized by the sol-gel method, with lithium acetate and cobalt acetate as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solutions were stirred for 30 hours with a magnetic stirrer, then heated at 80 °C under vigorous stirring until a viscous gel formed. The as-formed gel was calcined at 700 °C for 7 h in air. The structure and morphology of the LiCoO2 were characterized by X-ray diffraction and scanning electron microscopy. The diffraction pattern of the material can be indexed on the basis of the α-NaFeO2 structure. The clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) in the pattern indicates that the material formed in a well-ordered hexagonal structure.
No impurity phase is seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO2 shows an almost uniform particle size distribution, with particle sizes between 0.3 and 0.5 microns. In conclusion, LiCoO2 powder was successfully synthesized by the sol-gel method. The LiCoO2 showed a hexagonal crystal structure, and the diffraction pattern of the prepared sample clearly indicates phase-pure LiCoO2. The morphology of the sample showed an almost uniform particle size and size distribution.
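The hexagonal indexing described above can be checked against Bragg's law. The sketch below computes the expected 2θ position of the LiCoO2 (003) reflection for Cu Kα radiation; the lattice parameters are typical literature values assumed here for illustration, not values reported in this abstract:

```python
import math

def d_hexagonal(h, k, l, a, c):
    """Interplanar spacing for a hexagonal lattice:
    1/d^2 = (4/3) * (h^2 + h*k + k^2) / a^2 + l^2 / c^2."""
    inv_d2 = (4.0 / 3.0) * (h * h + h * k + k * k) / a**2 + l * l / c**2
    return 1.0 / math.sqrt(inv_d2)

wavelength = 1.5406      # Cu K-alpha wavelength in angstroms
a, c = 2.816, 14.05      # assumed literature lattice parameters of layered LiCoO2

d_003 = d_hexagonal(0, 0, 3, a, c)
two_theta_003 = 2 * math.degrees(math.asin(wavelength / (2 * d_003)))
# For well-ordered layered LiCoO2, the (003) reflection is expected near 19 degrees.
```

The same formula gives the (006), (102), (108), and (110) positions, so the doublet splitting noted in the abstract can be verified peak by peak.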

Keywords: cathode material, LiCoO2, lithium-ion rechargeable batteries, Sol-Gel method

Procedia PDF Downloads 357
888 Possible Role of Fenofibrate and Clofibrate in Attenuated Cardioprotective Effect of Ischemic Preconditioning in Hyperlipidemic Rat Hearts

Authors: Gurfateh Singh, Mu Khan, Razia Khanam, Govind Mohan

Abstract:

Objective: The present study was designed to investigate the beneficial role of fenofibrate and clofibrate in the attenuated cardioprotective effect of ischemic preconditioning (IPC) in hyperlipidemic rat hearts. Materials & Methods: Experimental hyperlipidemia was produced by feeding rats a high-fat diet for 28 days. Isolated Langendorff-perfused normal and hyperlipidemic rat hearts were subjected to global ischemia for 30 min followed by reperfusion for 120 min. The myocardial infarct size was assessed macroscopically using triphenyltetrazolium chloride staining. Coronary effluent was analyzed for lactate dehydrogenase (LDH) and creatine kinase-MB (CK-MB) release to assess the extent of cardiac injury. Moreover, oxidative stress in the heart was assessed by measuring thiobarbituric acid reactive substances (TBARS), superoxide anion generation, and reduced glutathione. Results: Ischemia-reperfusion (I/R) induced oxidative stress by increasing TBARS and superoxide anion generation and decreasing reduced glutathione in normal and hyperlipidemic rat hearts. Moreover, I/R produced myocardial injury, assessed in terms of increased myocardial infarct size, LDH and CK-MB release in coronary effluent, and decreased coronary flow rate in normal and hyperlipidemic rat hearts. In addition, hyperlipidemic rat hearts showed enhanced I/R-induced myocardial injury with a high degree of oxidative stress compared with normal rat hearts subjected to I/R. Four episodes of IPC (5 min each) afforded cardioprotection against I/R-induced myocardial injury in normal rat hearts, assessed in terms of improved coronary flow rate and reduced myocardial infarct size, LDH, CK-MB, and oxidative stress. On the other hand, IPC-mediated myocardial protection against I/R injury was abolished in hyperlipidemic rat hearts. However, treatment with fenofibrate (100 mg/kg/day, i.p.) or clofibrate (300 mg/kg/day, i.p.),
as agonists of PPAR-α, did not affect the cardioprotective effect of IPC in normal rat hearts, but markedly restored the cardioprotective potential of IPC in hyperlipidemic rat hearts. Conclusion: The high degree of oxidative stress produced in hyperlipidemic rat hearts during reperfusion, and the consequent down-regulation of PPAR-α, may be responsible for abolishing the cardioprotective potential of IPC.

Keywords: Hyperlipidemia, ischemia-reperfusion injury, ischemic preconditioning, PPAR-α

Procedia PDF Downloads 274
887 Moderate Electric Field Influence on Carotenoids Extraction Time from Heterochlorella luteoviridis

Authors: Débora P. Jaeschke, Eduardo A. Merlo, Rosane Rech, Giovana D. Mercali, Ligia D. F. Marczak

Abstract:

Carotenoids are high-value-added pigments that can alternatively be extracted from some microalgae species. However, the application of carotenoids synthesized by microalgae is still limited due to the use of toxic organic solvents. In this context, alternative extraction methods have been studied with more sustainable solvents to replace and reduce the solvent volume and the extraction time. The aim of the present work was to evaluate the extraction time of carotenoids from the microalga Heterochlorella luteoviridis using a moderate electric field (MEF) as a pre-treatment to the extraction. The extraction methodology consisted of a pre-treatment in the presence of MEF (180 V) and ethanol (25%, v/v) for 10 min, followed by a diffusive step performed for 50 min at a higher ethanol concentration (75%, v/v). The extraction experiments were conducted at 30 °C; to keep the temperature at this value, an extraction cell with a water jacket connected to a water bath was used. Also, to assess the MEF effect on the extraction, control experiments were performed in the same cell and under the same conditions without voltage application. During the extraction experiments, samples were withdrawn at 1, 5, and 10 min of the pre-treatment and at 1, 5, 30, 40, and 50 min of the diffusive step. Samples were then centrifuged, and carotenoid analyses were performed on the supernatant. Furthermore, an exhaustive extraction with ethyl acetate and methanol was performed, and the carotenoid content found in this analysis was taken as the total carotenoid content of the microalga. The results showed that applying MEF as a pre-treatment influenced the extraction yield and the extraction time of the diffusive step; after the MEF pre-treatment and 50 min of the diffusive step, it was possible to extract up to 60% of the total carotenoid content.
Also, the carotenoid concentrations of the extracts withdrawn at 5 and 30 min of the diffusive step did not differ statistically, meaning that carotenoid diffusion occurs mainly at the very beginning of the extraction. In the control experiments, on the other hand, carotenoid diffusion occurred mostly over the first 30 min of the diffusive step, which evidences the MEF effect on the extraction time. Moreover, the carotenoid concentrations of samples withdrawn during the pre-treatment (1, 5, and 10 min) were below the quantification limit of the analysis, indicating that the extraction occurred in the diffusive step, when ethanol (75%, v/v) was added to the medium. It is possible that MEF permeabilized the cell membrane, so that when ethanol (75%) was added, the carotenoids interacted with the solvent and diffused more easily. Based on these results, it can be inferred that MEF decreased the carotenoid extraction time by increasing the permeability of the cell membrane, which facilitates diffusion from the cell to the medium.

Keywords: moderate electric field (MEF), pigments, microalgae, ethanol

Procedia PDF Downloads 443
886 Life at the Fence: Lived Experiences of Navigating Cultural and Social Complexities among South Sudanese Refugees in Australia

Authors: Sabitra Kaphle, Rebecca Fanany, Jenny Kelly

Abstract:

Australia welcomes significant numbers of humanitarian arrivals every year with a commitment to provide equal opportunities and the resources required for integration into the new society. Over the last two decades, more than 24,000 South Sudanese people have come to call Australia home. Most of these refugees experienced several challenges while settling into the new social structures and service systems in Australia. The aim of this research is to explore the factors influencing the social and cultural integration of South Sudanese refugees who have settled in Australia. Methodology: This study used a phenomenological approach based on in-depth interviews designed to elicit the lived experiences of South Sudanese refugees settled in Australia. It applied the principles of narrative ethnography, allowing participants an opportunity to speak about themselves and their experiences of social and cultural integration in their own words. Twenty-six participants were recruited to the study. Participants were long-term residents (over 10 years of settlement experience) who self-identified as refugees from South Sudan. Participants were given the opportunity to speak in the language of their choice, and interviews were conducted by a bilingual interviewer in the participant's preferred language, at a time and location of their choosing. Interviews were recorded, transcribed verbatim, and translated to English for thematic analysis. Findings: Participants' experiences portray the complexities of integrating into a new society given the daily challenges that South Sudanese refugees face. Themes emerging from the narratives indicated that South Sudanese refugees express a high level of association with a Sudanese identity while demonstrating a significant level of integration into Australian society. Despite this identity dilemma, these refugees show a high level of consensus about the experience of living in Australia that is closely associated with a group identity.
In the process of maintaining identity and social affiliation, participants experience significant inter-generational cultural conflicts in adapting to Australian society. Identity conflict often emerges centering on what constitutes authentic cultural practice, as well as on who is entitled to claim membership in South Sudanese culture. Conclusions: The results of this study suggest that the cultural identity and social affiliations of South Sudanese refugees settling into Australian society are complex and multifaceted. While there are positive elements of their integration into the new society, the inter-generational conflicts and identity confusion require further investigation to understand the context that will assist refugees to integrate more successfully into their new society. Given the length of stay of these refugees in Australia, government and settlement agencies may benefit from developing appropriate resources and processes that are adaptive to the social and cultural context in which newly arrived refugees will live.

Keywords: cultural integration, inter-generational conflict, lived experiences, refugees, South Sudanese

Procedia PDF Downloads 101
885 Physical Education Effect on Sports Science Analysis Technology

Authors: Peter Adly Hamdy Fahmy

Abstract:

The aim of the study was to examine the effects of a physical education program on student learning by comparing teaching personal and social responsibility (TPSR) combined with a sport education model against TPSR combined with a traditional teaching model. The learning outcomes examined were sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes, totalling 133 students: an experimental group of 75 students and a control group of 58 students. Each teacher taught both an experimental and a control class for 16 weeks. The research methods included surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated-measures ANOVA were used to examine differences in student learning outcomes between TPSR combined with the sport education model and TPSR combined with the traditional teaching model. The research findings are as follows: 1) The TPSR sport education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) The traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sport education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance.
4) Based on qualitative data on teachers' and students' learning experience, the sport education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. Both hybrid curriculum designs, TPSR combined with sport education and TPSR combined with traditional teaching, are sound curriculum projects for moral character education that can be used in school physical education.

Keywords: character education, game performance, physical education, sport competence, sport season

Procedia PDF Downloads 23
884 Debriefing Practices and Models: An Integrative Review

Authors: Judson P. LaGrone

Abstract:

Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual’s learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer to guide a reflection with a purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience to allow the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systemic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature with similarities of effectiveness for participants in clinical simulation-based pedagogy. 
Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, and video-assisted debriefing to rapid cycle deliberate practice and reflective debriefing. A recurring finding centered on the need for continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models across nursing curricula, along with an increasing rate of ill-prepared faculty facilitating the debriefing phase of simulation.

Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education

Procedia PDF Downloads 135
883 The Impact of External Technology Acquisition and Exploitation on Firms' Process Innovation Performance

Authors: Thammanoon Charmjuree, Yuosre F. Badir, Umar Safdar

Abstract:

There is a consensus among innovation scholars that knowledge is a vital antecedent for firm’s innovation; e.g., process innovation. Recently, there has been an increasing amount of attention to more open approaches to innovation. This open model emphasizes the use of purposive flows of knowledge across the organization boundaries. Firms adopt open innovation strategy to improve their innovation performance by bringing knowledge into the organization (inbound open innovation) to accelerate internal innovation or transferring knowledge outside (outbound open innovation) to expand the markets for external use of innovation. Reviewing open innovation research reveals the following. First, the majority of existing studies have focused on inbound open innovation and less on outbound open innovation. Second, limited research has considered the possible interaction between both and how this interaction may impact the firm’s innovation performance. Third, scholars have focused mainly on the impact of open innovation strategy on product innovation and less on process innovation. Therefore, our knowledge of the relationship between firms’ inbound and outbound open innovation and how these two impact process innovation is still limited. This study focuses on the firm’s external technology acquisition (ETA) and external technology exploitation (ETE) and the firm’s process innovation performance. The ETA represents inbound openness in which firms rely on the acquisition and absorption of external technologies to complement their technology portfolios. The ETE, on the other hand, refers to commercializing technology assets exclusively or in addition to their internal application. 
This study hypothesized that both ETA and ETE have a positive relationship with process innovation performance and that ETE fully mediates the relationship between ETA and process innovation performance; i.e., ETA has a positive impact on ETE, and in turn, ETE has a positive impact on process innovation performance. These hypotheses were explored empirically in software development firms in Thailand, randomly selected from the list of software firms registered with the Department of Business Development, Ministry of Commerce of Thailand. Questionnaires were sent to 1,689 firms. After follow-ups and periodic reminders, we obtained 329 (19.48%) completed usable questionnaires. Structural equation modeling (SEM) was used to analyze the data. The analysis of the 329 firms supports all three hypotheses: First, a firm's ETA has a positive impact on its process innovation performance. Second, a firm's ETA has a positive impact on its ETE. Third, a firm's ETE fully mediates the relationship between its ETA and its process innovation performance. This study fills a gap in the open innovation literature by examining the relationship between inbound (ETA) and outbound (ETE) open innovation, and it suggests that in order to benefit from the promises of openness, firms must engage in both. The study goes one step further by explaining the mechanism through which ETA influences process innovation performance.
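The full-mediation structure hypothesized above (ETA → ETE → process innovation performance) can be illustrated with a toy regression check in the spirit of Baron and Kenny. The simulated data below are hypothetical stand-ins for the survey constructs, and plain OLS stands in for the SEM the authors actually used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 329  # matches the study's number of usable responses

# Simulated standardized scores (hypothetical): ETA drives ETE, and ETE
# drives process innovation performance (PIP), so ETA's effect on PIP
# is carried entirely through ETE, i.e. full mediation.
eta = rng.normal(size=n)
ete = 0.6 * eta + rng.normal(scale=0.8, size=n)
pip = 0.5 * ete + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Return OLS coefficients for y ~ 1 + xs (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Baron-Kenny style check of the three hypothesized paths:
c_total = ols(pip, eta)[1]     # ETA -> PIP (total effect, H1)
a_path = ols(ete, eta)[1]      # ETA -> ETE (H2)
b = ols(pip, eta, ete)         # ETA and ETE entered together
c_direct, b_path = b[1], b[2]  # under full mediation, c_direct shrinks toward 0
```

In this simulation the total effect of ETA is positive but its direct effect vanishes once ETE is controlled for, which is the signature of full mediation the third hypothesis claims.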

Keywords: process innovation performance, external technology acquisition, external technology exploitation, open innovation

Procedia PDF Downloads 182
882 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected; moreover, the complete records may be strongly distorted. In addition, the reason data is missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared with using the usually small share of complete data (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data, and thereby including the information of all data in the model, the distortion of the first training set, the complete data, vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; the demand estimate derived from the baseline data, however, does. This indicates that the baseline data set does not contain all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
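The evaluation procedure described above (randomly delete known values, impute them, compare the estimates with the truth) can be sketched as follows. The two-column data set and the simple regression-based imputer below are hypothetical stand-ins for the search-subscription data and the clustering/PCA/neural-network imputers actually used:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical demand data: [desired rooms, maximum price], with the two
# columns correlated, as rooms and budget are in real search profiles.
rooms = rng.integers(1, 7, size=500).astype(float)
price = 300.0 + 250.0 * rooms + rng.normal(scale=100.0, size=500)
data = np.column_stack([rooms, price])

# Randomly delete 20% of the known values, then try to recover them.
mask = rng.random(data.shape) < 0.2
incomplete = np.where(mask, np.nan, data)

def impute(X, iters=10):
    """Toy imputer: initialize missing cells with column means, then
    repeatedly re-estimate each missing cell by regressing its column
    on the other column."""
    X = X.copy()
    missing = np.isnan(X)
    means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = means[j]
    for _ in range(iters):
        for j in range(X.shape[1]):
            other = 1 - j
            A = np.column_stack([np.ones(len(X)), X[:, other]])
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            X[missing[:, j], j] = A[missing[:, j]] @ beta
    return X

imputed = impute(incomplete)

# Compare recovered prices against the truth, versus plain mean-filling.
pm = mask[:, 1]
rmse_model = np.sqrt(np.mean((imputed[pm, 1] - data[pm, 1]) ** 2))
rmse_mean = np.sqrt(np.mean((np.nanmean(incomplete[:, 1]) - data[pm, 1]) ** 2))
```

Because the columns are correlated, the regression-based imputer recovers the deleted prices far better than mean-filling, which is exactly the comparison against the actual data that the abstract's evaluation step performs.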

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 267
881 Validation of Global Ratings in Clinical Performance Assessment

Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek

Abstract:

This study aimed to determine the reliability of clinical performance assessments, which have been emphasized by ability-based education, and of professors' overall assessment methods. We addressed the following questions: First, is there a difference in what evaluators consider to be the main variables affecting the clinical performance test, according to the evaluator's length of service and number of evaluation experiences? Second, what is the relationship among the global rating score (G), the analytic global rating score (Gc), and the sum of the task-specific analytic checklist scores (C)? In particular, how do the analytic global rating components (six in the OSCE and four in the CPX sub-domains: aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude) and the task-specific analytic checklist score sum (C) affect the professor's overall global rating score (G)? We studied 75 professors who evaluated third- and fourth-year medical students in the 2016 Bugyeoung Consortium clinical skills performance test at Pusan National University Medical School in South Korea (39 professors in the OSCE, 36 in the CPX; all consented to participate in our study). Each evaluator used 3 forms: a task-specific analytic checklist, an analytic global rating scale with six sub-domains, and an overall global rating scale. After the evaluation, the professors responded to a questionnaire on the important factors in clinical performance assessment. The data were analyzed by frequency analysis, correlation analysis, and hierarchical regression analysis using SPSS 21.0.
Their understanding of the overall assessment was analyzed by dividing the subjects into groups based on experience. As a result, they considered 'precision' most important in the overall OSCE assessment, and 'precise, accurate physical examination', 'systemic approach to taking patient history', and 'diagnostic skill capability' in the overall CPX assessment. For the OSCE, there was no clear difference of opinion about the main factors, but there was for the CPX. The analytic global rating scale score, overall rating scale score, and analytic checklist score showed meaningful mutual correlations. According to the regression analysis, the task-specific checklist score sum had the greatest effect on the overall global rating. Professors regarded the task-specific analytic checklist score sum as best reflecting the overall OSCE test score, followed by aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude on the analytic global rating scale. For the CPX, the analytic global rating scale score, overall global rating scale score, and task-specific checklist score showed meaningful mutual correlations. These findings support the validity of professors' global ratings in clinical performance assessment.

Keywords: global rating, clinical performance assessment, medical education, analytic checklist

Procedia PDF Downloads 219
880 Determining the Effective Substance of Cottonseed Extract on the Treatment of Leishmaniasis

Authors: Mehrosadat Mirmohammadi, Sara Taghdisi, Ali Padash, Mohammad Hossein Pazandeh

Abstract:

Gossypol, a yellowish anti-nutritional compound found in cotton plants, exists in various plant parts, including the seeds, husks, leaves, and stems. Chemically, gossypol is a potent polyphenolic aldehyde with antioxidant and therapeutic properties. However, its free form can be toxic, posing risks to both humans and animals. Initially, we extracted gossypol from cotton seeds using n-hexane as a solvent (yield: 84.0 ± 4.0%). We also obtained cotton seed and cotton boll extracts via Soxhlet extraction (25:75 hydroalcoholic ratio). These extracts, combined with cornstarch, formed four herbal medicinal formulations. With ethical approval, we investigated their effects on Leishmania-caused skin wounds, comparing them to glucantime (local ampoule). The herbal formulas outperformed the control group (ethanol only) in wound treatment (p < 0.05). The average wound diameter after two months did not differ significantly between the plant extract ointments and topical glucantime. Notably, the cotton boll extract with 1% added gossypol crystal showed the best therapeutic effect. We extracted gossypol from cotton seeds with n-hexane via Soxhlet extraction, followed by saponification, acidification, and recrystallization. FTIR, UV-Vis, and HPLC analyses confirmed the product's identity. The herbal medicines from cotton seeds effectively treated chronic wounds compared with the ethanol-only control group. Wound diameter differed significantly between the extract ointments and glucantime injections. It seems that, due to the large amount of fat in the oil, the extraction of gossypol from it faces many obstacles. Extraction of this compound with our technique showed that extraction from the oil has higher efficiency, perhaps because the oil is prepared by cold pressing, so much less of this compound is lost than when extraction is done with the Soxhlet.
On the other hand, the gossypol in the oil is mostly bound to protein, which protects it until the last stage of the extraction process. Since the compound is very sensitive to light and heat, it was extracted as an acetic acid derivative. The treatment results further showed that the ointment prepared with the extract is more effective and that gossypol is one of its active ingredients. Gossypol can therefore be extracted from the oil and added back to the gossypol-depleted extract to produce an effective medicine at a defined dose.

Keywords: cottonseed, glucantime, gossypol, leishmaniasis

Procedia PDF Downloads 37
879 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups

Authors: Sakshi Bhalla

Abstract:

On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded using a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were analyzed for deviation both from their own isolated readings and from the originally intended meaning ascribed to each emoji. The analysis showed that each of the five age categories uses, understands and perceives emoji differently, which could be attributed to their degree of exposure. For example, the youngest category (aged < 20) was the least accurate at identifying emoji in isolation (~55%), and its tendency to change responses with respect to context was also the lowest (~31%). An inspection of their individual responses, however, suggests that these first-borns of social media have reached a point where emoji no longer evoke their most literal meanings: the context-derived meanings have taken over, even when the emoji are shown in isolation. These trends carry forward meaningfully for the other four groups as well. For the oldest category (aged > 35), by contrast, the trends indicated lower accuracy and, correspondingly, a greater tendency to change responses. Read as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms into ideograms. 
That is to say, they no longer reflect a one-to-one relation between a single form and a single meaning; they communicate increasingly complex ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which developed from simple pictograms into progressively more complex ideograms. This evolution within communication is parallel to, and contingent on, the simultaneous evolution of its media, and what is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one instance of language adapting to the demands of the digital world. That it has no spoken component or ostensible grammar, and that it lacks standardization of use and meaning, may seem like impediments to qualifying it as the 'language' of the digital world. Such a verdict, however, remains a function of time, and time alone.
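The two measures behind these percentages can be made concrete with a small sketch: accuracy against the intended meaning for emoji shown in isolation, and the share of responses that changed once the tweet context was added. The response data below are invented for illustration, not taken from the study.

```python
# Hedged sketch of the study's two per-group measures. All answers
# below are made-up placeholders, not actual participant responses.

def group_stats(isolated, in_context, intended):
    """Accuracy on isolated emoji, and fraction of responses that
    changed when context was added."""
    n = len(intended)
    accuracy = sum(a == t for a, t in zip(isolated, intended)) / n
    changed = sum(a != b for a, b in zip(isolated, in_context)) / n
    return accuracy, changed

intended   = ["joy", "tears", "fire", "skull", "clap"]   # assigned meanings
isolated   = ["joy", "laugh", "fire", "death", "clap"]   # answers, no context
in_context = ["joy", "laugh", "lit",  "death", "praise"] # answers, with tweet

acc, chg = group_stats(isolated, in_context, intended)
print(f"accuracy in isolation: {acc:.0%}, changed with context: {chg:.0%}")
```

Computing these two numbers per age group, and comparing them across groups, reproduces the kind of accuracy-versus-proclivity-to-change analysis described above.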

Keywords: communication, emoji, language, Twitter

Procedia PDF Downloads 80
878 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production reduces CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose significant problems in meeting load demand safely and cost-effectively over time. Significant benefits in terms of “grid system applications”, “end-use applications” and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small-to-medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since during the charging phase the electric power absorbed has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to selecting the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. 
The developed tool provides useful information for appropriately sizing the compression system and managing it in the most effective way. Various cases characterized by different system requirements are analysed; results are presented and discussed in detail.
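The coupling between reservoir charge level and absorbed power can be illustrated with a minimal sketch of an intercooled multi-stage compression model. All numerical values below (stage count, efficiency, intake conditions, mass flow) are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: shaft power to charge a CAES reservoir with an
# n-stage reciprocating compressor, each stage with an equal pressure
# ratio and idealised intercooling back to the intake temperature.

GAMMA = 1.4     # ratio of specific heats for air
CP = 1005.0     # specific heat at constant pressure, J/(kg K)
T_IN = 293.15   # intake / intercooler outlet temperature, K (assumed)
P_IN = 1.013e5  # ambient pressure, Pa
ETA_IS = 0.80   # assumed isentropic efficiency per stage

def compressor_power(m_dot, p_reservoir, n_stages=3):
    """Total shaft power (W) at mass flow m_dot (kg/s) against the
    current reservoir pressure p_reservoir (Pa)."""
    r_stage = (p_reservoir / P_IN) ** (1.0 / n_stages)
    exp = (GAMMA - 1.0) / GAMMA
    w_stage = CP * T_IN * (r_stage ** exp - 1.0) / ETA_IS  # J/kg per stage
    return n_stages * m_dot * w_stage

# As the reservoir fills, the pressure ratio -- and hence the power
# absorbed at a fixed mass flow -- rises continuously.
for p in (10e5, 30e5, 60e5):
    print(f"p_res = {p/1e5:4.0f} bar -> {compressor_power(0.5, p)/1e3:6.1f} kW")
```

Inverting this relation at the compressor's minimum and maximum deliverable mass flow gives, instant by instant, the band of absorbable electric power described above.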

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 209
877 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test

Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi

Abstract:

Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often, knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different from that of a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can differ under dynamic loads, and hence it is not certain that observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity in reinforced concrete beams subjected to drop weight impact tests. A test series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%) with a span length of 1.0 m, subjected to a point load at the beam mid-point, was carried out. 2 x 6 beams were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera recording at 5,000 fps was used; for the static tests, a camera recording at 0.5 fps was used. Digital image correlation (DIC) analyses were conducted, from which the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to an impact load were compared with those of 6 reference beams subjected to static loading only. The crack patterns were compared using DIC, and it was concluded that the resulting crack formation depended strongly on the test method used. 
For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and narrower shear cracks were observed in the region halfway to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the high drop height of 5.0 m. For beams subjected to an impact from the low drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail in bending when subjected to a static load. However, among the impact-tested beams, one beam exhibited a shear failure at a significantly reduced load level when tested statically, indicating that there might be a risk of reduced residual load capacity for impact-loaded structures.
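For context, the nominal impact conditions of such a drop weight test follow directly from energy conservation; the short sketch below applies this to the two drop heights used, neglecting air resistance and guide friction (an assumption).

```python
import math

# Hedged sketch: nominal velocity and kinetic energy of the 10 kg
# drop weight just before impact, from the drop height alone.
M = 10.0   # drop weight mass, kg (from the test series)
G = 9.81   # gravitational acceleration, m/s^2

def impact_conditions(h):
    """Return (velocity m/s, kinetic energy J) for drop height h in m,
    assuming free fall without losses."""
    v = math.sqrt(2.0 * G * h)
    e = M * G * h
    return v, e

for h in (2.5, 5.0):
    v, e = impact_conditions(h)
    print(f"h = {h} m -> v = {v:.2f} m/s, E = {e:.1f} J")
```

Doubling the drop height from 2.5 m to 5.0 m doubles the impact energy but raises the impact velocity only by a factor of √2, which is why the two heights probe distinctly different loading regimes.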

Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete

Procedia PDF Downloads 129
876 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, within which the growth rate of non-bank lending has surpassed that of bank lending owing to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders in credit evaluation and risk control; for example, an individual’s bank credit records are not visible to online lenders, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable, nationally universal credit registration and scoring system that utilizes online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are matched with a bank’s loan default records. Each captures distinct dimensions of a person’s characteristics, such as shopping patterns, and aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other for better performance. 
The traditional data types that banks rely on, such as income, occupation, and credit history, update over long cycles and hence cannot reflect more immediate changes, such as shifts in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more current and comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal coverage, and because they can by and large resolve the "thin-file" issue, given that digital footprints come in much larger volume and at higher frequency.
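A standard way to compare such scoring models is the ROC AUC on default labels; the sketch below computes it from scratch via the rank-sum formulation, on invented toy scores rather than the paper's bank records.

```python
# Hedged sketch: comparing two scoring models for default prediction
# by ROC AUC. Labels and scores are toy data for illustration only.

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation, with
    average ranks for tied scores."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum, i, rank = 0.0, 0, 1
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1                           # group of tied scores
        avg_rank = (2 * rank + (j - i) - 1) / 2.0
        rank_sum += avg_rank * sum(pairs[k][1] for k in range(i, j))
        rank += j - i
        i = j
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# toy comparison: a "shopping pattern" score vs a "social media" score
labels        = [0, 0, 1, 0, 1, 1, 0, 1]   # 1 = default
shop_scores   = [0.1, 0.3, 0.8, 0.2, 0.7, 0.9, 0.4, 0.6]
social_scores = [0.2, 0.1, 0.5, 0.6, 0.8, 0.7, 0.3, 0.9]
print("shopping AUC:", roc_auc(labels, shop_scores))
print("social   AUC:", roc_auc(labels, social_scores))
```

Running each candidate model through the same metric on a held-out set is the essence of the comparative analysis described above; in practice one would also compare calibration and cost-weighted error, not AUC alone.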

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 141
875 The Effect of Artificial Intelligence on Physical Education Analysis and Sports Science

Authors: Peter Adly Hamdy Fahmy

Abstract:

The aim of the study was to examine the effects of a physical education program on student learning by comparing the teaching of personal and social responsibility (TPSR) combined with a sports education model against TPSR combined with a traditional teaching model. The learning outcomes involved sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes comprising 133 students, with 75 students in the experimental group and 58 in the control group; each teacher taught both the experimental and the control group for 16 weeks. The research methods included surveys, interviews and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated measures ANOVA were used to examine differences in student learning outcomes between the TPSR sports education model and the TPSR traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness and responsibility. 2) The traditional teaching model with TPSR can improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR improved learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility and game performance. 
4) Qualitative data on teachers' and students' learning experience showed that the sports education model with TPSR significantly improves learning motivation, group interaction and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. The hybrid curriculum projects, TPSR-sports education and TPSR-traditional teaching, are both suitable vehicles for moral character education in school physical education.

Keywords: approach competencies, physical, education, teachers employment, graduate, physical education and sport sciences, SWOT analysis character education, sport season, game performance, sport competence

Procedia PDF Downloads 44
874 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends

Authors: Reena Aggarwal, Neha Kestwal

Abstract:

The Himalayan nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with nettle fibre to explore its potential for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it blends well with other fibres. Most importantly, its processing generates mainly organic wastes and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also serve as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though many research and training activities in the craft clusters of Uttarkashi, Chamoli and Bageshwar districts of Uttarakhand are directed towards fibre extraction and processing techniques such as retting and degumming, very little has been done to analyse crucial properties of nettle fibre such as shrinkage and wash fastness. These properties are critical for obtaining fibre of the quality required for yarn making and weaving, and for developing these fibres into fine saleable products. This research is therefore focused on field experiments on the shrinkage properties of cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyse the scope of the blended fibre for development into wearable fabrics. 
For the study, after initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarn were spun in an open-end spinning mill, with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six varieties of yarn were subjected to shrinkage testing as per ASTM method D2259, and the results were critically analysed. It was observed that 100% nettle had the least shrinkage, at 3.36%, while pure cotton shrank by approximately 13.6%; yarns made of 100% cotton thus exhibit about four times more shrinkage than 100% nettle. The results also show that cotton/nettle blended yarns exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the ratio of nettle in a sample increases, its shrinkage decreases. These results are very important for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre to generate sustainable employment.
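The shrinkage figures compared above reduce to a simple percent-length-change computation; the sketch below illustrates it with assumed specimen lengths chosen to reproduce the reported 3.36% and 13.6% values.

```python
# Hedged sketch: yarn shrinkage as percent length change, the quantity
# compared across blends in the study. Specimen lengths are assumed;
# only the resulting percentages match figures quoted in the text.

def shrinkage_pct(length_before, length_after):
    """Percent shrinkage from before/after specimen lengths."""
    return 100.0 * (length_before - length_after) / length_before

samples = {
    "100% nettle":          (500.0, 483.2),  # ~3.36 % (value from the text)
    "100% cotton":          (500.0, 432.0),  # ~13.6 % (value from the text)
    "60:40 cotton/nettle":  (500.0, 455.0),  # assumed intermediate value
}
for name, (l0, l1) in samples.items():
    print(f"{name:22s} {shrinkage_pct(l0, l1):5.2f} %")
```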

Keywords: Himalayan nettle, sustainable, shrinkage, blending

Procedia PDF Downloads 217
873 Coupling Strategy for Multi-Scale Simulations in Micro-Channels

Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier

Abstract:

With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. In the case where the flow characteristic length scale is narrowed to around ten times the mean free path of gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, on a thickness of about the mean free path of the molecules, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on the continuity of velocity, temperature and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how molecule interaction with walls can change the fluid flow and heat transfers at the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between both the molecular and the continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfers in micro-domains while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid, ii) close to the walls to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. 
The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws describing the physical properties, the equation of state and the slip relationships, together with their uncertainties. The latter make it possible to set up a learning strategy that optimizes the number of micro-simulations. In the present contribution, the first results of this coupling and learning strategy are illustrated through parametric studies of convergence criteria, the choice of basis functions and the noise of the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
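The idea of fitting a continuous law with uncertainty, then letting that uncertainty decide where to run the next micro-simulation, can be sketched with a toy Bayesian linear regression. The slip law, data and hyper-parameters below are invented for illustration; the paper's actual regression is richer.

```python
# Hedged sketch: fit u_slip = a + b * shear from a few synthetic
# "molecular dynamics" samples with Bayesian linear regression, then
# place the next micro-simulation where predictive variance is largest.

def bayes_fit(xs, ys, noise=0.1, prior=10.0):
    """Posterior mean and covariance over (a, b) for y = a + b*x,
    Gaussian prior N(0, prior*I), Gaussian noise of std `noise`."""
    s = 1.0 / noise**2
    A = [[1.0 / prior + s * len(xs), s * sum(xs)],
         [s * sum(xs), 1.0 / prior + s * sum(x * x for x in xs)]]
    rhs = [s * sum(ys), s * sum(x * y for x, y in zip(xs, ys))]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    cov = [[A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det, A[0][0] / det]]
    mean = [cov[0][0] * rhs[0] + cov[0][1] * rhs[1],
            cov[1][0] * rhs[0] + cov[1][1] * rhs[1]]
    return mean, cov

def pred_var(x, cov, noise=0.1):
    """Predictive variance at x for features phi = (1, x)."""
    return cov[0][0] + 2 * x * cov[0][1] + x * x * cov[1][1] + noise**2

xs, ys = [0.2, 0.4, 0.9], [0.11, 0.22, 0.44]  # (shear, slip) toy samples
mean, cov = bayes_fit(xs, ys)

# choose the next micro-simulation on a candidate grid of shear values
grid = [i / 10 for i in range(11)]
next_x = max(grid, key=lambda x: pred_var(x, cov))
print("posterior slope ~", round(mean[1], 2), "| next sample at shear =", next_x)
```

The learning loop stops adding micro-simulations once the predictive uncertainty everywhere on the grid falls below a tolerance, which is the sense in which the uncertainties "optimize the number of micro simulations" above.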

Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling

Procedia PDF Downloads 153
872 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and ever-increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the use of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is one of the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent differences in mode-field distribution. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. 
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust; these were simulated and measured, and their impact on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
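Encircled flux itself is simply the cumulative fraction of launched power within a given radius of the core center. A minimal sketch, assuming a Gaussian near-field intensity profile (an assumption for illustration, not a measured launch):

```python
import math

# Hedged sketch: encircled flux EF(r) from a sampled radial near-field
# intensity profile I(r), via trapezoidal integration of I(r)*2*pi*r.

def encircled_flux(radii, intensity):
    """Cumulative power fraction within each radius in `radii`."""
    power = [0.0]
    for i in range(1, len(radii)):
        f0 = intensity[i - 1] * 2 * math.pi * radii[i - 1]
        f1 = intensity[i] * 2 * math.pi * radii[i]
        power.append(power[-1] + 0.5 * (f0 + f1) * (radii[i] - radii[i - 1]))
    total = power[-1]
    return [p / total for p in power]

# toy Gaussian launch into a 50 um core (radius 25 um); the 10 um
# width is an assumed value, not from the measurements above
radii = [i * 0.5 for i in range(51)]               # 0 .. 25 um
prof = [math.exp(-(r / 10.0) ** 2) for r in radii]
ef = encircled_flux(radii, prof)
r_half = next(r for r, e in zip(radii, ef) if e >= 0.5)
print(f"EF reaches 50% of launched power at r = {r_half} um")
```

Launch-condition standards constrain EF at specified radii; comparing measured EF curves against such templates, for each defect scenario, is the kind of statistics the tests above collect.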

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 361