Search results for: applied linguistics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8390

1640 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy

Authors: Stefano Barone

Abstract:

Seismic retrofit of buildings through base isolation is a well-established protection strategy against earthquakes. It consists of decoupling the motion of the structure from that of the ground by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium/high dissipative capacity. This makes it possible to protect structural elements and to limit damage to non-structural ones; for these reasons, full functionality is guaranteed after an earthquake. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually requires neither interruption of the building's use nor evacuation of its occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa “La Maddalena” in Macerata (Marche region) and the “Giacomo Matteotti” and “Plinio Il Giovane” school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a Peak Ground Acceleration (PGA) of 0.213g-0.287g for the Life Safety Limit State and of 0.271g-0.359g for the Collapse Limit State. All the buildings are isolated with a combination of TETRON® CD free sliders with confined elastomeric disk and ISOSISM® HDRB anti-seismic rubber isolators, arranged to reduce the eccentricity between the centers of mass and stiffness and thus limit torsional effects during a seismic event. The isolation systems are designed to lengthen the original period of vibration (i.e., without isolators) by at least three times and to guarantee medium/high energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. This article presents the performance of the supplied anti-seismic devices, with particular attention to their experimental dynamic response.
Finally, a special focus is given to the main site activities required to isolate a masonry building.
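The period-lengthening and damping targets described above can be sketched numerically. The following is a minimal sketch with invented mass and fixed-base period values (the abstract reports no structural data); the damping correction factor is the Eurocode 8 expression η = √(10/(5+ξ)), used here only to illustrate how higher equivalent viscous damping reduces spectral demand.

```python
import math

def isolated_period(mass_kg, k_iso_n_per_m):
    """Fundamental period of a rigid superstructure on isolators: T = 2*pi*sqrt(m/k)."""
    return 2 * math.pi * math.sqrt(mass_kg / k_iso_n_per_m)

def damping_correction(xi_percent):
    """Eurocode 8 spectral reduction factor: eta = sqrt(10 / (5 + xi)), xi in percent."""
    return math.sqrt(10.0 / (5.0 + xi_percent))

# Illustrative (assumed) values: 2000-tonne building, fixed-base period 0.4 s.
m = 2.0e6        # kg
t_fixed = 0.4    # s
# Isolator stiffness chosen so that the isolated period is three times the original.
k_iso = 4 * math.pi**2 * m / (3 * t_fixed) ** 2
t_iso = isolated_period(m, k_iso)
print(round(t_iso, 2))                   # 1.2 s, three times the fixed-base period
print(round(damping_correction(16), 2))  # 0.69: spectral demand at 16% damping
```

The stiffness back-calculation shows the design logic: the target period ratio fixes the isolation-system stiffness once the seismic mass is known.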

Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices

Procedia PDF Downloads 42
1639 Analysis of Friction Stir Welding Process for Joining Aluminum Alloy

Authors: A. M. Khourshid, I. Sabry

Abstract:

Friction Stir Welding (FSW), a solid-state joining technique, is widely used for joining Al alloys in aerospace, marine, automotive, and many other applications of commercial importance. FSW was carried out using a vertical milling machine on Al 5083 alloy pipe. These pipe sections are relatively small in diameter (5 mm) and relatively thin-walled (2 mm). In this study, 5083 aluminum alloy pipes were welded as similar-alloy joints using the FSW process in order to investigate their mechanical and microstructural properties, at a rotation speed of 1400 rpm and weld speeds of 10, 40, and 70 mm/min. To investigate the effect of welding speed on mechanical properties, metallographic and mechanical tests, including Vickers hardness profiles and tensile tests of the joints, were carried out on the welded areas. To assess the metallurgical feasibility of friction stir welding for joining Al 6061 aluminum alloy, welding was also performed on pipes of different thicknesses (2, 3, and 4 mm); five rotational speeds (485, 710, 910, 1120, and 1400 rpm) and three traverse speeds (4, 8, and 10 mm/min) were applied. This work uses two methods, artificial neural networks (with the Pythia software) and response surface methodology (RSM), to predict the tensile strength, percentage elongation, and hardness of friction stir welded 6061 aluminum alloy. An artificial neural network (ANN) model was developed for the analysis of the friction stir welding parameters of 6061 pipe. The tensile strength, percentage elongation, and hardness of the weld joints were predicted as a function of tool rotation speed, material thickness, and travel speed, and a comparison was made between measured and predicted data. A response surface methodology (RSM) model was also developed, and the values obtained for the responses (tensile strength, percentage elongation, and hardness) were compared with measured values. The effect of the FSW process parameters on the mechanical properties of 6061 aluminum alloy has been analyzed in detail.
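As a rough illustration of the RSM side of this approach, the sketch below fits a full second-order response surface over the rotation-speed/traverse-speed grid named in the abstract, using coded units as is standard in RSM. The coefficients and responses are invented for the demonstration; the paper's actual data are not reproduced here.

```python
def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                     # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0] * n                            # back substitution
    for i in reversed(range(n)):
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

def design_row(s, f):
    # Full second-order RSM model terms: 1, s, f, s^2, f^2, s*f
    return [1.0, s, f, s * s, f * f, s * f]

def code(value, low, high):
    # Map a natural factor level into the coded [-1, 1] range used in RSM.
    return (2.0 * value - (high + low)) / (high - low)

# Invented coefficients for an illustrative tensile-strength surface (MPa).
true_b = [200.0, 15.0, -8.0, -12.0, -5.0, 3.0]
speeds, feeds = (485, 710, 910, 1120, 1400), (4, 8, 10)
X = [design_row(code(s, 485, 1400), code(f, 4, 10)) for s in speeds for f in feeds]
y = [sum(c * t for c, t in zip(true_b, row)) for row in X]
fitted = fit_least_squares(X, y)
print([round(v, 4) for v in fitted])  # recovers true_b on noise-free data
```

With 15 runs and 6 coefficients the quadratic model is well determined; on real measurements the same fit would additionally yield residuals for lack-of-fit checks.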

Keywords: friction stir welding (FSW), Al alloys, mechanical properties, microstructure

Procedia PDF Downloads 427
1638 Exploring the Dynamic Identities of Multilingual Adolescents in Contexts of L3+ Learning in Four European Sites

Authors: Harper Staples

Abstract:

A necessary outcome of today’s contemporary globalised reality, current views of multilingualism hold that it no longer represents the exception, but rather the rule. As such, the simultaneous acquisition of multiple languages represents a common experience for many of today's students and therefore represents a key area of inquiry in the domain of foreign language learner identity. Second and multilingual language acquisition processes parallel each other in many ways; however, there are differences to be found in the ways in which a student may learn a third language. A multilingual repertoire will have to negotiate complex change as language competencies dynamically evolve; moreover, this process will vary according to the contextual factors attributed to a unique learner. A developing multilingual identity must, therefore, contend with an array of potential challenges specific to the individual in question. Despite an overarching recognition in the literature that pluri-language acquisition represents a unique field of inquiry within applied linguistic research, there is a paucity of empirical work which examines the ways in which individuals construct a sense of their own identity as multilingual speakers in such contexts of learning. This study explores this phenomenon via a mixed-methods, comparative case study approach at four school sites based in Finland, France, Wales, and England. It takes a strongly individual-in-context view, conceptualising each adolescent participant in dynamic terms in order to undertake a holistic exploration of the myriad factors that might impact upon, and indeed be impacted by, a learner's developing multilingual identity. Emerging themes of note thus far suggest that, beyond the expected divergences in the experience of multilinguality at the individual level, there are contradictions in the way in which adolescent students in each site 'claim' their plurilingualism. 
This can be argued to be linked to both meso and macro-level factors, including the foreign language curriculum and, more broadly, societal attitudes towards multilingualism. These diverse emergent identifications have implications not only for attainment in the foreign language but also for student well-being more generally.

Keywords: foreign language learning, student identity, multilingualism, educational psychology

Procedia PDF Downloads 137
1637 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator

Authors: Swati Swati, Yuhang Chen, Robert Reuben

Abstract:

Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary arterial disease, and they are routinely deployed to clear atheromatous plaque. Because its structure is cylindrically symmetrical, the stent essentially applies a uniform internal pressure to the artery, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose is to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for predicting future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro-scale. The measurement methodology consists of a strain gauge system monitored in real time by LabVIEW software. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results in order to extract measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that, with a decrease in arterial volume and an increase in arterial pressure, arterial strain increases, thereby decreasing the arterial compliance.
The measurement system could lead to the development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.
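The "simple creep model" is not specified in the abstract; as a hedged illustration, a Kelvin-Voigt-type creep compliance J(t) = J0 + J1·(1 − exp(−t/τ)) gives strain-versus-time curves of the general shape measured while the balloon holds pressure. All parameter values below are invented.

```python
import math

def creep_strain(stress_kpa, t_s, j0=0.01, j1=0.02, tau=5.0):
    """Kelvin-Voigt-type creep: eps(t) = sigma * (J0 + J1 * (1 - exp(-t/tau)))."""
    return stress_kpa * (j0 + j1 * (1.0 - math.exp(-t_s / tau)))

# Strain rises from the instantaneous elastic value sigma*J0 toward the
# long-time value sigma*(J0 + J1) as the gelatine creeps under load.
print(round(creep_strain(10.0, 0.0), 3))  # instantaneous elastic strain: 0.1
print(round(creep_strain(10.0, 1e6), 3))  # long-time (equilibrium) strain: 0.3
```

Fitting J0, J1 and τ to the recorded strain-time traces would be one way to reduce the measurements to a small set of compliance parameters.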

Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement

Procedia PDF Downloads 244
1636 Microwave-Assisted Alginate Extraction from Portuguese Saccorhiza polyschides – Influence of Acid Pretreatment

Authors: Mário Silva, Filipa Gomes, Filipa Oliveira, Simone Morais, Cristina Delerue-Matos

Abstract:

Brown seaweeds are abundant along the Portuguese coastline and represent an almost unexploited marine economic resource. One of the most common species, easily available for harvesting on the northwest coast, is Saccorhiza polyschides, which grows on the lower shore and coastal rocky reefs. It is used almost exclusively by local farmers as a natural fertilizer, but it contains a substantial amount of valuable compounds, particularly alginates, natural biopolymers of high interest for many industrial applications. Alginates are natural polysaccharides present in the cell walls of brown seaweed; they are highly biocompatible, with particular properties that make them of high interest for the food, biotechnology, cosmetics and pharmaceutical industries. Conventional extraction processes are based on thermal treatment; they are lengthy and consume high amounts of energy and solvents. In recent years, microwave-assisted extraction (MAE) has shown enormous potential to overcome the major drawbacks of conventional (thermal and/or solvent-based) plant material extraction techniques, and it has been successfully applied to the extraction of agar, fucoidans and alginates. In the present study, the acid pretreatment of the brown seaweed Saccorhiza polyschides for subsequent microwave-assisted extraction of alginate was optimized. Seaweeds were collected in Northwest Portuguese coastal waters of the Atlantic Ocean between May and August 2014. Experimental design was used to assess the effect of temperature and acid pretreatment time on alginate extraction. Response surface methodology allowed the determination of the optimum conditions: 40 mL of 0.1 M HCl per g of dried seaweed with constant stirring at 20 °C for 14 h. Optimal acid pretreatment conditions significantly enhanced the MAE of alginates from Saccorhiza polyschides, thus contributing to the development of a viable, more environmentally friendly alternative to conventional processes.
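The experimental design behind this kind of optimization is typically a central composite design. The sketch below only generates the coded design points for two factors (here they would be temperature and pretreatment time); it is a generic illustration, not the paper's actual run sheet.

```python
import itertools

def ccd_points(k, n_center=3):
    """Coded design points of a rotatable central composite design for k factors."""
    alpha = (2 ** k) ** 0.25                 # rotatability: alpha = (2^k)^(1/4)
    factorial = list(itertools.product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):                       # star points on each factor axis
        for s in (-alpha, alpha):
            p = [0.0] * k
            p[i] = s
            axial.append(tuple(p))
    center = [(0.0,) * k] * n_center         # replicated center points
    return factorial + axial + center

pts = ccd_points(2)
print(len(pts))  # 4 factorial + 4 axial + 3 center = 11 runs
```

Each coded point is then mapped to natural units (a temperature and a time), the alginate yield is measured per run, and a quadratic surface is fitted to locate the optimum.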

Keywords: acid pretreatment, alginate, brown seaweed, microwave-assisted extraction, response surface methodology

Procedia PDF Downloads 347
1635 Presenting of 'Local Wishes Map' as a Tool for Promoting Dialogue and Developing Healthy Cities

Authors: Ana Maria G. Sperandio, Murilo U. Malek-Zadeh, João Luiz de S. Areas, Jussara C. Guarnieri

Abstract:

Intersectoral governance is a requirement for developing healthy cities. However, it is difficult to achieve, especially in regions with limited resources. Therefore, a low-cost investigative procedure was developed to diagnose sectoral wishes related to urban planning and health promotion. This procedure is composed of two phases, which can be applied to different groups in order to compare the results. The first phase is a conversation guided by a list of questions. Some of those questions aim to gather information about how individuals understand concepts such as the healthy city or health promotion and what they believe constitutes the relation between urban planning and urban health. Other questions investigate local issues and how citizens would like to promote dialogue between sectors. In the second phase, individuals stand around a map of the investigated city (or city region) and are asked to represent their wishes on it. They can do so by writing text annotations or inserting icons, with each icon representing a city element, for example, trees, a square, a playground, a hospital, or a cycle track. After the groups have represented their wishes, the map can be photographed, and the results from distinct groups can be compared. This procedure was conducted in 2017 in a small city in Brazil (Holambra), during the first of the four years of the mayor's term. The prefecture asked for this tool in order to make Holambra a member of the Potential Healthy Municipalities Network in Brazil. Two sectors were investigated: the government and the urban population. At the end of the investigation, the intersection of the two groups' maps (i.e., population and government) was used to create a map of common wishes. The material produced can therefore be used as a guide for promoting dialogue between sectors and as a tool for monitoring policy progress.
The report of this procedure was directed to public managers, so they could see the wishes they share with local populations and use this tool as a guide for creating urban policies intended to enhance health promotion and to develop a healthy city, even under low-resource conditions.
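The "map of common wishes" is, in effect, the intersection of the wishes each group placed on the map. A minimal sketch, with invented wish labels standing in for the photographed map contents:

```python
def common_wishes(group_maps):
    """Wishes placed on the map by every group -> the 'map of common wishes'."""
    sets = [set(m) for m in group_maps]
    return set.intersection(*sets)

# Hypothetical tagged wishes from the two sectors investigated.
government = {"bike lane", "health clinic", "tree planting", "playground"}
population = {"bike lane", "playground", "public square", "tree planting"}
print(sorted(common_wishes([government, population])))
# ['bike lane', 'playground', 'tree planting']
```

The symmetric differences (wishes unique to one group) are equally useful, since they flag exactly where intersectoral dialogue is needed.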

Keywords: governance, health promotion, intersectorality, urban planning

Procedia PDF Downloads 110
1634 Assessing the Impact of the Rome II Regulation's General Rule on Cross-Border Road Traffic Accidents: A Critique of Recent Case Law

Authors: Emma Roberts

Abstract:

The Rome II Regulation has established a uniform regime of conflict-of-laws rules across the European Union (except for Denmark) which determines the law applicable to disputes over non-contractual obligations. It marks a significant development towards the Europeanization of private international law and aims to provide the most appropriate connecting factors to achieve both legal certainty and justice in individual cases. Many non-contractual obligations are recognised as presenting such distinct factors that, to achieve these aims, special rules are provided for determining the applicable law in cases of, for example, product liability and environmental torts. Throughout the legislative process, the European Parliament sought to establish a separate rule for road traffic accidents, recognising that these cases too present such novel situations that a blanket application of the lex loci damni approach would not provide an appropriate answer. Those attempts were rejected and, as a result, cases arising out of road traffic accidents are subject to the Regulation's general lex loci damni rule along with its escape clause and limited exception. This paper offers a critique of the Regulation's response to cross-border road traffic accident cases. In England and Wales, few cases have applied the Regulation's provisions to date, but significantly the majority of those cases concern road traffic accidents. This paper examines the decisions in those cases and challenges the legislators' decision not to provide a special rule for such incidents. Owing to the diversity of compensation systems globally, applying the Regulation's general rule to road traffic accident cases, given the breadth of matters subject to the lex causae, cannot ensure an outcome that provides ‘justice in individual cases’ as is assured by the Regulation's recitals.
Not only does this paper suggest that the absence of a special rule for road traffic accidents means that the Regulation fails to achieve one of its principal aims, but it further makes out a compelling case for the legislative body of the European Union to implement a corrective instrument.

Keywords: accidents abroad, applicable law, cross-border torts, non-contractual obligations, road traffic accidents

Procedia PDF Downloads 230
1633 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD, so such models would enable affordable and effective screening even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). The resulting models can dynamically discriminate individuals at risk of developing CKD, and all of them performed well with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have thus demonstrated the use of different categories of diagnostic data for CKD prediction.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
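As a stand-in for the Random Forest and XGBoost models (whose configuration the abstract does not give), the sketch below trains a plain logistic classifier on synthetic "non-laboratory" features to illustrate the idea of risk prediction without laboratory data. The data, the scaling, and the risk rule are all invented.

```python
import math, random

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * (len(rows[0]) + 1)          # last weight is the bias term
    for _ in range(epochs):
        for x, t in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))  # predicted risk
            g = p - t                       # gradient of the log loss
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
            w[-1] -= lr * g
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0 else 0

# Synthetic "non-laboratory" features: [age/100, BMI/50]; label 1 = develops CKD.
random.seed(0)
data, labels = [], []
for _ in range(200):
    age, bmi = random.uniform(20, 80), random.uniform(18, 40)
    data.append([age / 100, bmi / 50])
    labels.append(1 if age + bmi > 95 else 0)   # invented risk rule
w = train_logistic(data, labels)
acc = sum(predict(w, x) == t for x, t in zip(data, labels)) / len(data)
print(acc)
```

In the study itself, tree ensembles are preferred precisely because they tolerate mixed feature sets; the point here is only that a usable risk score can be built from a handful of non-laboratory variables.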

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 72
1632 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Recently developed scales have addressed teachers’ resilience specifically. Although they have benefited the field, they do not include some of the critical protective factors of teachers’ resilience identified in the literature. To address this limitation, we aimed to design a more comprehensive scale for measuring teachers' resilience which encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers’ Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers’ Resilience Scale and the Teachers’ Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers’ Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor. It revealed that 9 items did not fit their factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers’ resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). The results show satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form.
In conclusion, our results confirmed that the TPFRS is a valid multi-dimensional instrument for assessing teachers' protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.
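The 2PL model used for the item screening defines, for each item, an endorsement probability and a Fisher information curve; items with low discrimination carry little information and are candidates for removal. A minimal sketch with hypothetical item parameters:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of endorsing the item
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P), peaking at theta = b."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical parameters: discrimination a = 1.5, difficulty b = 0.
print(p_2pl(0.0, 1.5, 0.0))             # 0.5 exactly at theta == b
print(item_information(0.0, 1.5, 0.0))  # 1.5**2 * 0.25 = 0.5625
```

A poorly fitting item in this framework is one whose estimated discrimination is low (flat information curve) or whose empirical response curve departs from the fitted logistic shape.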

Keywords: resilience, protective factors, teachers, item response theory

Procedia PDF Downloads 56
1631 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead is toxic to the human body and may cause serious problems even at low concentrations, with several adverse effects on human health. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasound-assisted ionic liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce was established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH 5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were then separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters on the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH 5. The linear range of the calibration curve for the FAAS determination of lead was 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg/mL, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The lead levels for pomegranate, zucchini, and lettuce were 2.88 μg/g, 1.54 μg/g, and 2.18 μg/g, respectively.
Therefore, this method has been successfully applied for the analysis of the content of lead in different food samples by FAAS.
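The analytical figures of merit reported above follow standard definitions. The sketch below shows the usual formulas with invented blank and calibration numbers, not the paper's raw data.

```python
def limit_of_detection(sd_blank, slope):
    """Common IUPAC-style estimate: LOD = 3 * sd(blank) / calibration slope."""
    return 3.0 * sd_blank / slope

def enrichment_factor(slope_after, slope_before):
    """Ratio of calibration slopes with and without the microextraction step."""
    return slope_after / slope_before

def rsd_percent(values):
    """Relative standard deviation: 100 * sample std dev / mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * var ** 0.5 / mean

# Illustrative numbers only.
print(round(limit_of_detection(0.0021, 0.101), 3))   # 0.062 (ug/mL)
print(round(enrichment_factor(9.3, 0.10), 1))        # 93.0
print(round(rsd_percent([2.85, 2.90, 2.88, 2.92, 2.79]), 2))
```

The enrichment factor can equivalently be computed as the ratio of analyte concentration in the sedimented IL phase to that in the original sample.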

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 252
1630 Synthesis and Characterization of Sulfonated Aromatic Hydrocarbon Polymers Containing Trifluoromethylphenyl Side Chain for Proton Exchange Membrane Fuel Cell

Authors: Yi-Chiang Huang, Hsu-Feng Lee, Yu-Chao Tseng, Wen-Yao Huang

Abstract:

Proton exchange membranes, as a key component in fuel cells, have been widely studied over the past few decades. Proton exchange membranes should have some key characteristics, such as good mechanical properties, high oxidative stability and high proton conductivity. In this work, trifluoromethyl groups were introduced on the polymer backbone and phenyl side chain, which allows densely located sulfonic acid group substitution and also promotes solubility as well as thermal and oxidative stability. Herein, a series of novel sulfonated aromatic hydrocarbon polyelectrolytes was synthesized by polycondensation of 4,4''''-difluoro-3,3''''-bis(trifluoromethyl)-2'',3''-bis(3-(trifluoromethyl)phenyl)-1,1':4',1'':4'',1''':4''',1''''-quinquephenyl with 2'',3''',5'',6''-tetraphenyl-[1,1':4',1'':4'',1''':4''',1''''-quinquephenyl]-4,4''''-diol, and post-sulfonation was carried out with chlorosulfonic acid to give sulfonated polymers (SFC3-X) possessing ion exchange capacities of 1.93, 1.91 and 2.53 mmol/g. ¹H NMR and FT-IR spectroscopy were applied to confirm the structure and composition of the sulfonated polymers. The membranes exhibited considerable dimensional stability (10-27.8% change in length; 24-56.5% change in thickness) and excellent oxidative stability (weight remaining higher than 97%). The membranes demonstrated good tensile strength on account of their highly rigid multi-phenylated backbone; Young's moduli ranged from 0.65 to 0.77 GPa, much larger than that of Nafion 211 (0.10 GPa). Proton conductivities ranged from 130 to 240 mS/cm at 80 °C under fully humidified conditions, comparable to or higher than that of Nafion 211 (150 mS/cm). The morphology of the membranes was investigated by transmission electron microscopy, which demonstrated a clear hydrophilic/hydrophobic phase separation with spherical ionic clusters in the size range of 5-20 nm.
A single fuel cell with SFC3-1.97 demonstrated a maximum power density of 1.08 W/cm², compared with 1.24 W/cm² for the Nafion 211 reference in this work. These results indicate that the SFC3-X polymers are good candidates for proton exchange membranes in fuel cell applications. Fuel cell tests of the other membranes are underway.
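Conductivities like those quoted are typically obtained from impedance measurements via σ = L/(R·A). A minimal sketch with an assumed cell geometry (the abstract gives no measurement details):

```python
def proton_conductivity(length_cm, resistance_ohm, area_cm2):
    """In-plane conductivity from an impedance measurement: sigma = L / (R * A), in S/cm."""
    return length_cm / (resistance_ohm * area_cm2)

# Assumed cell geometry: 1 cm between electrodes, membrane cross-section
# 1 cm wide x 50 um thick, and an illustrative measured resistance.
area = 1.0 * 50e-4                          # cm^2
sigma = proton_conductivity(1.0, 1330.0, area)
print(round(sigma * 1000))                  # in mS/cm, ~150 (the Nafion 211 reference value)
```

The same relation shows why thin membranes are measured in-plane: the tiny cross-sectional area makes the resistance large enough to read accurately.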

Keywords: fuel cells, polyelectrolyte, proton exchange membrane, sulfonated polymers

Procedia PDF Downloads 423
1629 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample: it carries out biological detection via a linked transducer and converts the biological response into an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide (GO) inside a vacuum chamber is presented. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate reduced graphene oxide (rGO). The micro-scale details of the morphological structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process.
The parameters assessed include the layer thickness and the processing environment. The results presented show high accuracy and repeatability while achieving low-cost production.

Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy

Procedia PDF Downloads 91
1628 Project-Based Learning Application: Applying Systems Thinking Concepts to Assure Continuous Improvement

Authors: Kimberley Kennedy

Abstract:

The major findings of this study highlight the importance of understanding and applying systems thinking concepts to ensure an effective Project-Based Learning environment. A pilot study of a major pedagogical change was conducted over a five-year period with the goal of giving students real-world, hands-on learning experiences and the opportunity to apply what they had learned over the first two years of their business program. The first two weeks of the fifteen-week semester used lectures, guest speakers and design thinking workshops to prepare students for the project work. For the remaining thirteen weeks of the semester, the students worked with actual business owners and clients on projects and challenges. The first three years of the five-year study focused on student feedback to ensure a quality learning experience and to develop a continuous improvement process. The final two years of the study examined faculty's conceptual understanding and perception of learning and teaching with Project-Based Learning pedagogy as compared to lectures and more traditional teaching methods. Relevant literature was reviewed, and data were collected from program faculty participants who completed pre- and post-semester interviews and surveys over a two-year period. Systems thinking concepts were applied to better understand the challenges faculty face when using Project-Based Learning pedagogy rather than more traditional teaching methods. Factors such as instructor and student fatigue, motivation, quality of work and enthusiasm were explored to better understand how to provide faculty with effective support and resources when Project-Based Learning pedagogy is the main teaching method. This study provides value by presenting generalizable, foundational knowledge and offering suggestions for practical solutions to assure student and teacher engagement in Project-Based Learning courses.

Keywords: continuous improvement, project-based learning, systems thinking, teacher engagement

Procedia PDF Downloads 98
1627 The Amount of Conformity of Persian Subject Headlines with Users' Social Tagging

Authors: Amir Reza Asnafi, Masoumeh Kazemizadeh, Najmeh Salemi

Abstract:

Due to the diversity of information resources in the Web 2.0 environment, which is growing in number over time, the social tagging system should be used to describe Internet resources. Studying the relevance of social tags to subject headings can help enrich resources and make them more accessible to users. The present research is of an applied-theoretical type and uses the content analysis research method. In this study, using the listing method and content analysis, the level of exact, approximate, relative, and non-conformity of the social tags of books in the field of information science and bibliography on the Kitabrah website with Persian subject headings was determined. The exact matching of subject headings with social tags averaged 22 items, the approximate matching averaged 36 items, the relative matching averaged 36 items, and the non-matching items averaged 116. According to the findings, exact matching of subject headings with social tags is the lowest, and non-conformity is the highest. This study showed that the average non-conformity of subject headings with social tags is higher than the sum of the three types of exact, relative, and approximate matching. As a result, the relevance of subject headings to social tags is low. This is because subject headings are static text that users cannot interact with or extend with new words and topics, whereas websites based on Web 2.0 and the social classification system offer this possibility to users. An important point of the present study, and of the studies that have compared the syntactic and semantic matching of social tags with subject headings, is that the degree of conformity of subject headings with social tags is low.
Therefore, these two methods can complement each other and create a hybrid cataloging that includes both subject headings and social tags. The low level of conformity of subject headings with social tags confirms the results of previous studies that compared the social tags of books with the Library of Congress subject headings. Matching social tags with subject headings is not enough on its own; the two methods should be treated as complementary.

Keywords: Web 2.0, social tags, subject headings, hybrid cataloging

Procedia PDF Downloads 134
1626 Structural and Functional Comparison of Untagged and Tagged EmrE Protein

Authors: S. Junaid S. Qazi, Denice C. Bay, Raymond Chew, Raymond J. Turner

Abstract:

EmrE, a member of the small multidrug resistance protein family in bacteria, is considered the archetypical member of its family. It confers host resistance to a wide variety of quaternary cation compounds (QCCs), driven by the proton motive force. Purification yield is generally a challenge for all membrane proteins because of the difficulties in their expression, isolation, and solubilization; EmrE is extremely hydrophobic, which makes a good purification yield particularly challenging. We have purified EmrE protein using two different approaches: organic solvent membrane extraction and hexahistidine (his6)-tagged Ni-affinity chromatography. We have characterized the differences in ligand affinity between untagged and his6-tagged EmrE proteins in similar membrane-mimetic environments using biophysical experimental techniques. Purified proteins were solubilized in a buffer containing n-dodecyl-β-D-maltopyranoside (DDM), and the conformations of the proteins were explored in the presence of four QCCs: methyl viologen (MV), ethidium bromide (EB), cetylpyridinium chloride (CTP), and tetraphenylphosphonium (TPP). SDS-Tricine PAGE and dynamic light scattering (DLS) analysis revealed that the addition of QCCs did not induce higher multimeric forms of either protein at any of the QCC:EmrE molar ratios examined under the solubilization conditions applied. QCC binding curves obtained from the Trp fluorescence quenching spectra gave the values of the dissociation constant (Kd) and the maximum specific one-site binding (Bmax). Lower Bmax values for QCC binding to his6-tagged EmrE show that some binding sites remained unoccupied. This lower saturation suggests that the his6-tagged version adopts a conformation that prevents saturated binding. Our data demonstrate that tagging an integral membrane protein can significantly influence the protein.
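The Kd and Bmax values above come from fitting the quench data to a one-site binding model. As a rough illustration only (the grid-search fitter, the synthetic data, and every number below are assumptions, not the authors' analysis), the model B = Bmax·[L]/(Kd + [L]) can be fitted like this:

```python
def one_site_binding(conc, bmax, kd):
    """Specific binding for the one-site model B = Bmax * L / (Kd + L)."""
    return bmax * conc / (kd + conc)

def fit_one_site(concs, responses, bmax_grid, kd_grid):
    """Crude grid-search least-squares fit (illustrative stand-in for a
    nonlinear regression routine)."""
    best = None
    for bmax in bmax_grid:
        for kd in kd_grid:
            sse = sum((one_site_binding(c, bmax, kd) - r) ** 2
                      for c, r in zip(concs, responses))
            if best is None or sse < best[0]:
                best = (sse, bmax, kd)
    return best[1], best[2]

# Synthetic quench data generated from Bmax = 1.0, Kd = 50 (arbitrary units)
concs = [5, 10, 25, 50, 100, 200, 400]
responses = [one_site_binding(c, 1.0, 50.0) for c in concs]
bmax_grid = [x / 10 for x in range(5, 16)]   # 0.5 .. 1.5
kd_grid = list(range(10, 101, 5))            # 10 .. 100
bmax_fit, kd_fit = fit_one_site(concs, responses, bmax_grid, kd_grid)
```

A lower fitted Bmax for the tagged protein, at similar Kd, is what the abstract interprets as unoccupied binding sites.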

Keywords: small multidrug resistance (SMR) protein, EmrE, integral membrane protein folding, quaternary ammonium compounds (QAC), quaternary cation compounds (QCC), nickel affinity chromatography, hexahistidine (His6) tag

Procedia PDF Downloads 349
1625 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are keys for the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis uses the Fourier transform to process time-series groundwater-level observations and analyze the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools and optimizes the best estimate of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is a useful tool for identifying hydrogeological structures.
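The daily-frequency signature that the frequency analysis looks for (pumping switched on and off each day) can be extracted from a level record by evaluating the Fourier amplitude at one cycle per day. A minimal stdlib sketch with a single-frequency DFT on synthetic hourly data (the series, sampling, and amplitude are all illustrative assumptions, not the study's data):

```python
import cmath
import math

def amplitude_at_frequency(series, dt_hours, freq_per_day):
    """Amplitude of one sinusoidal component, via a single-frequency DFT
    (a stdlib stand-in for reading one FFT bin)."""
    n = len(series)
    f_per_sample = freq_per_day * dt_hours / 24.0  # cycles per sample
    mean = sum(series) / n
    acc = sum((x - mean) * cmath.exp(-2j * math.pi * f_per_sample * k)
              for k, x in enumerate(series))
    return 2.0 * abs(acc) / n

# Synthetic hourly groundwater level: 10 m base plus a 0.5 m daily
# pumping-induced oscillation over 30 days
levels = [10.0 + 0.5 * math.sin(2 * math.pi * h / 24) for h in range(24 * 30)]
amp_daily = amplitude_at_frequency(levels, dt_hours=1, freq_per_day=1)
```

A pronounced daily amplitude in an observation well would point, in the spirit of the abstract, at an aquifer stressed by artificial extraction.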

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 129
1624 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals

Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya

Abstract:

A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. A laser-ultrasonic method is developed to evaluate the residual stresses and subsurface defects in metals. The method is based on laser thermo-optical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal, and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of the acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the velocity of longitudinal ultrasonic waves in the samples on the value of the applied compressive stress in the range of 20-100 MPa.
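If the velocity-stress relation is taken as linear (the acoustoelastic approximation; the sensitivity value and the sample numbers below are illustrative assumptions inferred from the quoted ±3 m/s precision and 22.7 MPa threshold, not calibration data from the paper), stress can be estimated from the measured velocity shift:

```python
def stress_from_velocity_shift(v_measured, v_unstressed, sensitivity):
    """Infer uniaxial stress (MPa) from a longitudinal-wave velocity shift,
    assuming a linear acoustoelastic relation dv = k * sigma."""
    return (v_measured - v_unstressed) / sensitivity

# Hypothetical sensitivity: +/-3 m/s precision mapping to a 22.7 MPa
# threshold implies roughly 3 / 22.7 ~ 0.13 m/s per MPa.
k = 3.0 / 22.7
sigma = stress_from_velocity_shift(5925.3, 5920.0, k)  # ~40 MPa
```

The quoted detection threshold then simply equals the velocity precision divided by this sensitivity.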

Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses

Procedia PDF Downloads 293
1623 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality, outperforming previous traditional translation systems for many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, some serious challenges remain for NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case for well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to that of the Google NMT using the Bilingual Evaluation Understudy (BLEU) score.
BLEU is an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
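For intuition, the BLEU score mentioned above combines clipped n-gram precision with a brevity penalty. A minimal sketch (up to bigrams only, with an idiom example from the abstract; production evaluations use higher-order n-grams and smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Minimal BLEU sketch: clipped n-gram precisions up to max_n,
    geometric mean, multiplied by the brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages short candidates
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

ref = "il pleut des cordes".split()
score = bleu("il pleut des cordes".split(), ref)  # exact match scores 1.0
```

Comparing such scores between the custom AutoML model and the stock NMT, alongside human judgments, is the evaluation design the abstract describes.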

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 107
1622 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads, and to bending moments that depend on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material and to prevent damage, a numerical analysis approach can be utilized through the Finite Element Method. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress-concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical-element identification in the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for several combinations of mean and alternating stresses. A life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading to determine the number of cycles that corresponds to the oscillatory stress with zero mean. The results determine the adequacy of the design of the model in terms of fatigue strength and the number of cycles that the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
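The Soderberg check described above can be written as a one-line safety-factor formula, sigma_a/Se + sigma_m/Sy = 1/n. A sketch with hypothetical stresses and material properties (none of the numbers come from the paper):

```python
def soderberg_safety_factor(sigma_a, sigma_m, endurance_limit, yield_strength):
    """Soderberg criterion: sigma_a/Se + sigma_m/Sy = 1/n.
    Returns the fatigue safety factor n (all stresses in MPa)."""
    return 1.0 / (sigma_a / endurance_limit + sigma_m / yield_strength)

# Hypothetical mean/alternating stresses at a rivet-hole critical element
n = soderberg_safety_factor(sigma_a=80.0, sigma_m=60.0,
                            endurance_limit=240.0, yield_strength=400.0)
# n > 1 places the stress state below the Soderberg constant-life line
```

A factor below 1 at any critical element is exactly the situation in which the abstract's alternative design solutions would be developed.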

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 112
1621 Comparison of Patient Satisfaction and Observer Rating of Outpatient Care among Public Hospitals in Shanghai

Authors: Tian Yi Du, Guan Rong Fan, Dong Dong Zou, Di Xue

Abstract:

Background: The patient satisfaction survey is becoming increasingly important for hospitals and other providers seeking more reimbursement and/or more governmental subsidies. However, when the results of patient satisfaction surveys are compared among medical institutions, there are some concerns. The primary objectives of this study were to evaluate patient satisfaction in tertiary hospitals of Shanghai and to compare the satisfaction ratings of physician services between patients and observers. Methods: Two hundred outpatients were randomly selected for the patient satisfaction survey in each of 28 public tertiary hospitals of Shanghai. Four or five volunteers were selected to observe the practice of 5 physicians in each of the above hospitals and to rate the observed physicians' practice. The outpatients whose physician visits the volunteers observed also filled in the satisfaction questionnaires. The rating scale for the outpatient survey and the volunteers' observation ranged from 1 (very dissatisfied) to 6 (very satisfied). If the rating was equal to or greater than 5, we considered the outpatients and volunteers satisfied with the services. The validity and reliability of the measure were assessed. Multivariate regressions were run for each of the 4 dimensions of patient satisfaction and for overall satisfaction. Paired t-tests were applied to analyze the rating agreement on physician services between outpatients and volunteers. Results: Overall, 90% of surveyed outpatients were satisfied with outpatient care in the tertiary public hospitals of Shanghai. The lowest three satisfaction rates were seen in the items 'Restrooms were sanitary and not crowded' (81%), 'It was convenient for the patient to pay medical bills' (82%), and 'Medical cost in the hospital was reasonable' (84%). After adjusting for the characteristics of patients, patient satisfaction in general hospitals was higher than that in specialty hospitals.
In addition, after controlling for patient characteristics and the number of hospital visits, hospitals with a higher outpatient cost per visit had lower patient satisfaction. Paired t-tests showed that the ratings on 6 of the 14 items in the physician-services dimension differed significantly between outpatients and observers; on 5 of these, the observers rated lower than the outpatients. Conclusions: Hospital managers and physicians should use patient satisfaction and observers' evaluations to detect room for improvement in areas such as social skills, cost control, and medical ethics.
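The paired t-test used to compare outpatient and observer ratings on matched encounters reduces to the t statistic of the rating differences. A stdlib sketch with hypothetical ratings (the data below are invented for illustration, not the study's):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched ratings: mean difference divided by
    the standard error of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical 1-6 ratings for one physician-service item, same encounters
patients  = [5, 6, 5, 6, 5, 6, 5, 5]
observers = [4, 5, 5, 5, 4, 5, 5, 4]
t_stat = paired_t(patients, observers)  # positive: patients rate higher
```

A large positive t, as here, corresponds to the study's finding that observers tended to rate physician services lower than outpatients did.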

Keywords: patient satisfaction, observation, quality, hospital

Procedia PDF Downloads 296
1620 The Basin Management Methodology for Integrated Water Resources Management and Development

Authors: Julio Jesus Salazar, Max Jesus De Lama

Abstract:

The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.), and levels (area: patches to global; knowledge: a specific case to generalized principles). In this context, we need to apply natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control, and soil formation to protect water resources and to address the challenges related to the recovery or conservation of the ecosystem, as well as the natural characteristics of water bodies, in order to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on the positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of the ecosystem and its natural services, thus contributing to both adaptation to and mitigation of climate change. The methodology was applied in 7 interventions in the sub-basin of the Shullcas River in Huancayo-Junín-Peru, yielding substantial benefits within a framework of stakeholder alliances and integrated planning scenarios. To implement the methodology in the sub-basin of the Shullcas River, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change. Finally, the process concluded with the selection of alternatives and projects of this type.
The CST approach and process therefore face the challenges of climate change through integrated, systematic, interdisciplinary, and collective responses at different scales that fit the needs of ecosystems and their services, which are vital to human well-being. This methodology is now being replicated at the level of the Mantaro river basin, improving with other initiatives that lead toward the model of a resilient basin.

Keywords: Climate Smart Territories (CST), climate change, ecosystem services, natural measures

Procedia PDF Downloads 116
1619 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which obtains historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc. is mainly managed in structured and unstructured report form, existing as handouts or hard copies of reports. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data for disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, etc., into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information, and the data are stored in the disaster database system. Gathering and creating disaster information from unstructured data based on Optical Character Recognition is an important element of smart disaster management. In this paper, a character recognition rate of over 90% for Korean characters was achieved using an upgraded OCR. The character recognition rate depends on the font, size, and special symbols of the characters; we improved it through a machine learning algorithm.
The converted structured data are managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that the structured information can be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
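The step from raw OCR text to a standardized record can be sketched as simple key-value parsing followed by code assignment. Everything below is illustrative: the field names, the sample text, and the disaster codes are invented stand-ins for the paper's (unspecified) disaster code system:

```python
import re

# Hypothetical OCR output from a scanned disaster-response report
ocr_text = """Event: Flood  Date: 2019-07-12
Region: Gangwon  Damage: 3 buildings  Response: pumps deployed"""

def to_disaster_record(text):
    """Parse 'Key: value' pairs from OCR'd text into a structured record
    and attach an illustrative disaster code."""
    fields = dict(re.findall(r"(\w+):\s*([^\n]+?)(?=\s+\w+:|$)", text))
    code_map = {"Flood": "ND-01", "Typhoon": "ND-02"}  # hypothetical codes
    fields["code"] = code_map.get(fields.get("Event"), "ND-99")
    return fields

record = to_disaster_record(ocr_text)
```

In a real pipeline this record, not the scanned image, is what gets stored and queried over the disaster cycle.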

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 95
1618 Heteroatom Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalysts for all Vanadium Redox Flow Battery

Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang

Abstract:

As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence the efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered the most feasible electrode materials in the VRFB because of their excellent potential in terms of operating range, good permeability, large surface area, and reasonable cost. However, owing to limited electrochemical activity and reversibility, and poor wettability due to their hydrophobic properties, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the surface of carbon through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect on the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of the heteroatom on the metal oxide promotes the adsorption of vanadium ions. The controlled morphology of the bimetallic metal oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, much higher than the 59.8% and 63.2% obtained with pristine carbon at high current density. Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at the higher current density.
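The two efficiencies quoted are linked by the standard relation EE = CE × VE (energy = coulombic × voltage efficiency), so the coulombic efficiency can be backed out from the reported numbers. A small sketch (the decomposition is the standard VRFB convention; the paper itself does not report CE):

```python
def coulombic_efficiency(energy_eff, voltage_eff):
    """Back out coulombic efficiency from EE = CE * VE (all as fractions)."""
    return energy_eff / voltage_eff

ce_doped = coulombic_efficiency(0.748, 0.789)      # doped electrode, ~0.95
ce_pristine = coulombic_efficiency(0.598, 0.632)   # pristine carbon
```

The implied coulombic efficiencies are similar, consistent with the gain coming mainly from reduced voltage losses, i.e., the electrocatalytic effect.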

Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom doping

Procedia PDF Downloads 62
1617 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be taken to be equal to the product of the number of divided damper areas and the contact force of the primary system with the granular materials per divided area. This simplification makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
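The two ingredients above, a spring-dashpot contact law and the divided-area scaling assumption, can be sketched in a few lines (the contact law shown is the common linear variant; the stiffness, damping, and division count below are illustrative, not the paper's values):

```python
def contact_force(overlap, rel_velocity, k, c):
    """Linear spring-dashpot normal contact force used in DEM:
    F = k*delta + c*delta_dot, acting only while particles overlap."""
    return k * overlap + c * rel_velocity if overlap > 0 else 0.0

def wall_force_by_scaling(per_area_force, n_divisions):
    """The paper's speed-up assumption: identical particle behavior in each
    divided area, so total wall force = n_divisions * per-area force."""
    return n_divisions * per_area_force

# Hypothetical single-contact force, then scaled over 8 identical areas
f_one = contact_force(overlap=1e-4, rel_velocity=0.05, k=1e5, c=10.0)
f_total = wall_force_by_scaling(f_one, n_divisions=8)
```

Only the particles of one representative area need full time-stepping, which is where the claimed reduction in calculation time comes from.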

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 435
1616 Numerical Model of Low Cost Rubber Isolators for Masonry Housing in High Seismic Regions

Authors: Ahmad B. Habieb, Gabriele Milani, Tavio Tavio, Federico Milani

Abstract:

Housing in developing countries often has inadequate seismic protection, particularly masonry housing. People choose this type of structure since the cost and construction are relatively cheap. Seismic protection of masonry remains an interesting issue among researchers. In this study, we develop a low-cost seismic isolation system for masonry using fiber-reinforced elastomeric isolators. The proposed elastomer consists of a few layers of rubber pads and fiber laminae, making it cheaper than conventional isolators. We present a finite element (FE) analysis to predict the behavior of the low-cost rubber isolators undergoing moderate deformations. The FE model of the elastomer uses a hyperelastic material model for the rubber pads: we adopt the Yeoh hyperelasticity model and estimate its coefficients from the available experimental data. Having characterized the shear behavior of the elastomers, we apply the isolation system to a small masonry house. To attach the isolators to the building, we model the shear behavior of the isolation system by means of a damped nonlinear spring model; this makes the FE analysis computationally inexpensive. Several ground motion records are applied to observe the sensitivity of the system. Roof acceleration and tensile damage of the walls are the parameters used to evaluate the performance of the isolators. In this study, a concrete damage plasticity model is used to model masonry in the nonlinear range; this tool is available in the standard package of the Abaqus FE software. Finally, the results show that the proposed low-cost isolators are capable of reducing the roof acceleration and damage level of masonry housing. Through this study, we are also able to monitor the shear deformation of the isolators during seismic motion, which is useful for determining whether the isolator is applicable. According to the results, the deformations of the isolators on the benchmark one-story building are relatively small.
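For the Yeoh model named above, W = C1(I1-3) + C2(I1-3)² + C3(I1-3)³, simple shear gives I1 - 3 = γ², so the shear stress follows by differentiating W with respect to γ. A sketch with hypothetical coefficients (the C values below are illustrative, not the fitted coefficients of the paper):

```python
def yeoh_shear_stress(gamma, c1, c2, c3):
    """Shear stress for simple shear under the Yeoh model
    W = C1*(I1-3) + C2*(I1-3)^2 + C3*(I1-3)^3, with I1 - 3 = gamma^2,
    hence tau = dW/dgamma = 2*gamma*(C1 + 2*C2*gamma^2 + 3*C3*gamma^4)."""
    g2 = gamma * gamma
    return 2.0 * gamma * (c1 + 2.0 * c2 * g2 + 3.0 * c3 * g2 * g2)

# Hypothetical coefficients (MPa) of the order used for filled rubber:
# negative C2 softens, positive C3 stiffens at large strain
tau = yeoh_shear_stress(gamma=1.0, c1=0.4, c2=-0.03, c3=0.01)
```

This γ-τ curve is exactly the shear behavior that the damped nonlinear spring then reproduces at the building scale.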

Keywords: masonry, low cost elastomeric isolator, finite element analysis, hyperelasticity, damped non-linear spring, concrete damage plasticity

Procedia PDF Downloads 254
1615 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure

Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser

Abstract:

Various studies have reported that global warming causes an unstable climate and many serious impacts on the physical environment and public health. The increasing incidence of dengue is now a priority health issue and has become a health burden for Pakistan. This study investigates whether the spatial pattern of the environment causes the emergence or an increasing rate of dengue fever incidence, affecting the population and its health. The climatic (environmental) structure data and the dengue fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring in terms of re-tabulating, and finally different statistical methods, techniques, and procedures were applied for the evaluation. The five climatic variables studied are precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H), and wind speed (W), collected from 1980 to 2012. The dengue cases in Karachi from 2010 to 2012 are reported on a weekly basis. Principal component analysis is applied to explore the climatic variables and/or the climatic structure that may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1 for the whole period is the general atmospheric condition. PC2 for the dengue period is the contrast between precipitation and wind speed. PC3 is the weighted difference between maximum temperature and wind speed. PC4 for the dengue period is the contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models are used to relate the dengue fever incidence to the climatic variables and principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%. Maximum temperature increases the chances of dengue occurrence by 19.48%. Minimum temperature increases the chances of dengue occurrence by 11.51%. Wind speed decreases the weekly occurrence of dengue fever by 7.41%.
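The percentage effects quoted come from the standard interpretation of Poisson/negative binomial coefficients: a coefficient β corresponds to a (e^β - 1) × 100% change in the expected case count per unit increase of the predictor. A sketch (the β values below are back-calculated to reproduce the abstract's percentages, not the fitted coefficients themselves):

```python
import math

def percent_change_per_unit(beta):
    """Interpret a Poisson/negative-binomial coefficient as the percent
    change in expected weekly cases per unit increase of the predictor."""
    return (math.exp(beta) - 1.0) * 100.0

# Hypothetical coefficients of the sign and size implied by the abstract
humidity_effect = percent_change_per_unit(0.01696)   # about +1.71%
wind_effect = percent_change_per_unit(-0.07700)      # about -7.41%
```

Positive coefficients (humidity, temperatures) raise the expected dengue count; the negative wind-speed coefficient lowers it.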

Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model

Procedia PDF Downloads 407
1614 Paradox of Growing Adaptive Capacities for Sustainability Transformation in Urban Water Management in Bangladesh

Authors: T. Yasmin, M. A. Farrelly, B. C. Rogers

Abstract:

Urban water governance in developing countries faces numerous challenges arising from uncontrolled urban population expansion, water pollution, economic pressure and, more recently, climate change impacts, while transitioning towards a sustainable system. Sustainability transition requires developing the adaptive capacities of the socio-ecological and socio-technical system to deal with this complexity. Adaptive capacities deliver strategies to connect individuals, organizations, agencies, and institutions at multiple levels. Understanding the level of adaptive capacity for sustainability transformation has thus gained significant research attention within developed countries, but much less so in developing countries. Filling this gap, this article develops a conceptual framework for analysing the level of adaptive capacities (if any) within a developing-country context. The framework is then applied to the chronological development of urban water governance strategies in Bangladesh over almost two centuries. The chronological analysis of governance interventions reveals that crises (public health, food, and natural hazards) became opportunities, opening windows for experimentation and learning as deviations from traditional practice. Self-organization and networks created the platform for developments or disruptions that generate change. Leadership (internal or external) is important for nurturing and upscaling these developments or disruptions, for guiding policy visions and targets, and for championing implementation on the ground. In the case of Bangladesh, leadership from international and national aid organizations and their targets has always led development, whereas social capital tools (trust, power relations, cultural norms) have more often acted as disruptions. Historically, this is evident in the development pathways of urban water governance in Bangladesh.
Overall, this research shows that some level of adaptive capacity is growing for sustainable urban growth in big cities; it remains unclear, however, whether similar growth is occurring in medium and small cities.

Keywords: adaptive capacity, Bangladesh, sustainability transformation, water governance

Procedia PDF Downloads 363
1613 The Effect of Transactional Analysis Group Training on Self-Knowledge and Its Ego States (The Child, Parent, and Adult): A Quasi-Experimental Study Applied to Counselors of Tehran

Authors: Mehravar Javid, Sadrieh Khajavi Mazanderani, Kelly Gleischman, Zoe Andris

Abstract:

The present study investigates the effectiveness of transactional analysis (TA) group training on self-knowledge and its ego-state dimensions (parent, adult, and child) in counselors working in public and private high schools in Tehran. Counseling has become an important profession for society, and organizations need consultants; providing better and more efficient counseling is one of the goals of the education system, and the personal characteristics of counselors matter for the success of therapy. In TA, humans have three ego states, named parent, adult, and child, and the central concept is the ego state: a stable pattern of feeling and thinking tied to corresponding patterns of behavior. Self-knowledge, considered a prerequisite for effective communication, fosters psychological growth, and recognizing it is pivotal for emotional development, leading to profound insights. The research sample included 30 working counselors (22 women and 8 men) in the academic year 2019-2020 who achieved the lowest scores on the self-knowledge questionnaire. The research method was quasi-experimental with a control group (15 people in the experimental group and 15 in the control group). The research tool was a self-knowledge questionnaire with 29 questions and three subscales (child, parent, and adult ego states). The experimental group received transactional analysis training in 10 weekly 2-hour sessions; the questionnaire was then administered to both groups (post-test). Multivariate analysis of covariance was used to analyze the data. The data showed that the level of self-knowledge of counselors who received transactional analysis training was higher than that of counselors who did not receive any training (p<0.01).
These results show that transactional analysis training is an effective intervention for enhancing self-knowledge and its subscales (adult, parent, and child ego states). Teaching transactional analysis increases self-knowledge and self-realization and helps people achieve independence and overcome irresponsibility, improving intrapersonal and interpersonal relationships.

Keywords: ego state, group, transactional analysis, self-knowledge

Procedia PDF Downloads 44
1612 Understanding Ambivalent Behaviors of Social Media Users toward the 'Like' Function: A Social Capital Perspective

Authors: Jung Lee, L. G. Pee

Abstract:

The 'Like' function in social media platforms represents the immediate responses of social media users to postings and to other users. A large number of 'likes' is often attributed to fame, agreement, and support from others, which many users are proud of and happy with. However, what 'like' implies exactly in the social media context is still under discussion. Some argue that it is an accurate parameter of the preferences of social media users, whereas others counter that it is merely an instant reaction, volatile and vague. To address this gap, this study investigates how social media users perceive the 'like' function and how they behave differently based on their perceptions. The study posits the following arguments. First, 'like' is interpreted as a quantified form of social capital that resides in social media platforms. This incarnated social capital rationalizes the attraction of people to social media and the belief that social media platforms benefit their relationships with others. This social capital is then conceptualized into cognitive and emotive dimensions, where the cognitive dimension represents the quantitative awareness of the 'likes' and the emotive dimension represents their qualitative reception. Finally, the ambivalent perspective of social media users on 'like' (i.e., social capital) is applied. This view explains why social media users appreciate receiving 'likes' from others yet are aware that those 'likes' can distort the actual responses of other users by sending erroneous signals. The rationale for this ambivalence rests on whether users perceive social media as a private or public sphere: when social media is more publicized, the ambivalence is more strongly observed. By combining the ambivalence and the dimensionality of social capital, four types of social media users with different liking-behavior mechanisms are identified.
To validate this work, a survey of 300 social media users was conducted. The analysis results support most of the hypotheses and confirm that people hold ambivalent perceptions of 'like' as social capital and that these perceptions influence behavioral patterns. The implications of the study are clear. First, it explains why social media users exhibit different behaviors toward 'likes' in social media. Although most people believe that the number of 'likes' is the simplest and most frank measure of support from other social media users, this study identifies users who do not trust 'likes' as a stable and reliable parameter of social media. In addition, the study links the concept of social media openness to the differing behaviors of social media users. Social media openness has theoretical significance because it defines the psychological boundaries of social media from the perspective of users.

Keywords: ambivalent attitude, like function, social capital, social media

Procedia PDF Downloads 200
1611 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, a linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4, and 8/6) for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. The study focuses on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The worldwide drive to develop renewable energy technologies through dedicated scientific research motivated this study, given its positive impact on the economy and the environment. In addition, the scarcity of rare earth metals for permanent magnets, caused by mining limitations, export bans by top producers, and environmental restrictions, constrains the availability of materials for manufacturing rotating machines. This challenge gave the authors the opportunity to study, analyze, and determine the optimum design of the SRG, which has the advantage of being free of permanent magnets and rotor windings, offers a flexible control system, and is compatible with any application that requires variable-speed operation. The SRG has also proved to be efficient and reliable in both low-speed and high-speed applications. Linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach for the Switched Reluctance Machine (SRM). About 90 different pole-angle combinations and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. The procedure is applicable to any SRG configuration, dimension, and excitation pattern. The results provide evidence for using the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.
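The exhaustive sweep over pole-angle combinations can be illustrated with a deliberately simplified linear model, not the paper's modified generalized matrix approach: in an idealized SRM, inductance ramps from its unaligned to its aligned value over the pole-overlap region, and the back-EMF per unit excitation current is proportional to the rotor speed times dL/dtheta. All inductance values, speeds, and candidate arcs below are hypothetical placeholders, not the machines analyzed in the paper.

```python
import numpy as np

# Hypothetical machine constants (illustrative only).
L_MIN, L_MAX = 0.01, 0.11      # unaligned/aligned phase inductance, henries
OMEGA = 100.0                  # rotor speed, rad/s

def volts_per_amp(beta_s, beta_r, stroke):
    """Average generated voltage per unit current over one stroke.

    A narrower pole overlap steepens the inductance ramp (higher EMF) but
    can shorten the generating dwell within the stroke.
    """
    overlap = min(beta_s, beta_r)          # inductance ramp width, rad
    dwell = min(overlap, stroke)           # generating portion of the stroke
    slope = (L_MAX - L_MIN) / overlap      # dL/dtheta on the ramp
    return OMEGA * slope * dwell / stroke

# Sweep stator/rotor pole-arc combinations for an 8/6 machine (15 deg stroke),
# mirroring the paper's exhaustive search over pole-angle combinations.
stroke = np.deg2rad(15)
arcs = np.deg2rad(np.arange(16, 31))       # candidate pole arcs, 16..30 deg
results = [(volts_per_amp(bs, br, stroke), bs, br)
           for bs in arcs for br in arcs]
best_v = max(v for v, _, _ in results)
print(f"{len(results)} combinations swept; best output = {best_v:.1f} V per A")
```

In this toy model the output per ampere grows as the overlap shrinks toward the stroke angle, which echoes the design trade-off the sweep is meant to expose; the paper's matrix-based model resolves it per configuration and excitation pattern.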

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 163