Search results for: heat exchange coefficient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6329

599 Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children

Authors: Hsiu Tan Liu, Chun Jung Liu

Abstract:

Developing a sign language receptive skills test serves multiple purposes: such a test can be an important tool for education and for understanding the sign language ability of deaf children. No test for these purposes was previously available in Taiwan. Based on expert discussion and on the standardized Taiwanese Sign Language Receptive Test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed, and its items were designed. After several rounds of pre-trials, discussion and revision, TSL-RST was finalized; it can be administered and scored online. Thirty-three deaf children from all three deaf schools in Taiwan agreed to be tested. Through item analysis, items with a good discrimination index and fair difficulty index were selected, and psychometric indices of reliability and validity were established. A regression formula that predicts the sign language receptive skills of deaf children was then derived. The main results of this study are as follows. (1) TSL-RST includes three sub-tests: vocabulary comprehension (21 items), syntax comprehension (20 items) and paragraph comprehension (9 items). (2) TSL-RST can be administered individually online. Deaf students' sign language ability can be scored quickly and objectively, so they receive feedback and results immediately, which benefits both teaching and research. Most subjects complete the test within 25 minutes, and during the test they can answer the questions without relying on their reading ability or memory capacity. (3) The vocabulary comprehension sub-test is the easiest, syntax comprehension is harder, and paragraph comprehension is the hardest.
Each of the three sub-tests, as well as the whole test, shows a good item discrimination index. (4) The psychometric indices are good, including internal consistency reliability (Cronbach's α coefficient), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to non-verbal IQ, teachers' ratings of the students' sign language ability, and the students' self-ratings of their own ability. Higher-grade students performed better than lower-grade students, and students with deaf parents performed better than those with hearing parents; these results give TSL-RST good discriminant validity. (5) The predictors of primary deaf students' sign language ability are age and the number of years since they began learning sign language. The results suggest that TSL-RST can effectively assess deaf students' sign language ability. This study also proposes a model for developing sign language tests.
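The internal-consistency index named above, Cronbach's α, can be computed directly from an item-score matrix. A minimal sketch with invented response data (not the TSL-RST sample):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented responses from 6 children on 4 dichotomous items (1 = correct)
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
])
print(round(cronbach_alpha(scores), 3))
```

Values near 0.9, like those reported for the sub-scales here, indicate that the items measure a common construct.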

Keywords: comprehension test, elementary school, sign language, Taiwan sign language

Procedia PDF Downloads 183
598 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Researchers, power companies, and policymakers have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land-use factor, although it suffers from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found promising for producing steam directly for a thermal cycle in order to generate low-cost electricity, and it has also shown promise for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTFs), and to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. The optimization shows that a solar multiple (SM) between 1.2 and 1.5 achieves an LCOE as low as 9 ¢/kWh for direct steam generation with the LFR. The plant can produce around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. Overall, the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% less capital cost.
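The LCOE metric used in the comparison divides discounted lifetime costs by discounted lifetime energy. A sketch with invented placeholder figures (the capex, opex, discount rate and lifetime below are not the paper's cost model, only the ~141 GWh/year output is taken from the abstract):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, years):
    """Levelized cost of electricity: discounted lifetime costs over
    discounted lifetime energy, in $/kWh."""
    disc = sum((1 + discount_rate) ** -t for t in range(1, years + 1))
    return (capex + annual_opex * disc) / (annual_energy_kwh * disc)

# Placeholder inputs for a 50 MWe plant producing ~141 GWh/year
value = lcoe(capex=100e6, annual_opex=5e6, annual_energy_kwh=141e6,
             discount_rate=0.07, years=25)
print(f"{value * 100:.1f} cents/kWh")
```

With these made-up inputs the result lands near the 9 ¢/kWh figure quoted above; in practice the capital cost difference between DSG and ISG is what drives the 2% LCOE gap reported.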

Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation

Procedia PDF Downloads 106
597 Characterization of a Three-Electrodes Bioelectrochemical System from Mangrove Water and Sediments for the Reduction of Chlordecone in Martinique

Authors: Malory Jonata

Abstract:

Chlordecone (CLD) is an organochlorine pesticide that was used between 1971 and 1993 in both Guadeloupe and Martinique to control the banana black weevil. Its bishomocubane structure gives the compound high stability in organic matter and high persistence in the environment. Recently, researchers found that CLD can be degraded by isolated bacterial consortia and, in particular, by bacteria such as Citrobacter sp. 86 and Desulfovibrio sp. 86. To date, six families of CLD transformation products are known. Moreover, the latest findings show that CLD is disappearing faster than first predicted in highly contaminated soil in Guadeloupe. However, the toxicity of the transformation products is still unknown, and knowledge of the degradation pathways and chemical characteristics of chlordecone and its transformation products must be deepened. Microbial fuel cells (MFC) are electrochemical systems that convert organic matter into electricity thanks to electroactive bacteria, which can exchange electrons through their membranes with solid surfaces or molecules. MFC have proven their efficiency as bioremediation systems in water and soils and are already used for the bioremediation of several organochlorine compounds such as perchlorate, trichlorophenol and hexachlorobenzene. In this study, a three-electrode system, inspired by MFC, is used to try to degrade chlordecone using bacteria from a mangrove swamp in Martinique. Some mangrove bacteria are known to be electroactive, and the CLD level appears to decline in mangrove swamp sediments. This study aims to show that electroactive bacteria from a Martinique mangrove swamp can degrade CLD in a three-electrode bioelectrochemical system. To this end, the three-electrode assembly was connected to a potentiostat.
The substrate used is mangrove water and sediment sampled in the mangrove swamp of La Trinité, a coastal city in Martinique where CLD contamination has already been studied. Electroactive biofilms are formed by imposing a potential relative to a Saturated Calomel Electrode using chronoamperometry, and their behavior is studied by cyclic voltammetry. Biofilms were studied under different imposed potentials, several substrate conditions, and with or without CLD. To quantify the evolution of CLD levels in the system's substrate, gas chromatography coupled with mass spectrometry (GC-MS) was performed on pre-treated samples of water and sediment after short-, medium- and long-term contact with the electroactive biofilms. Results showed that between -0.8 V and -0.2 V, the three-electrode system was able to reduce the chemical in the substrate solution. The first GC-MS analyses of samples spiked with CLD appear to reveal a decrease in CLD concentration over time. In conclusion, the designed bioelectrochemical system can provide the conditions necessary for chlordecone degradation, although the three-electrode control settings must be improved to increase degradation rates. The biological pathways remain to be elucidated through biological analysis of the electroactive biofilms formed in this system. Moreover, the electrochemical study of the mangrove substrate provides new information on its potential use for bioremediation, but further studies are needed for a better understanding of the electrochemical potential of this environment.

Keywords: bioelectrochemistry, bioremediation, chlordecone, mangrove swamp

Procedia PDF Downloads 71
596 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

When performing verification analyses for the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies worldwide perform these tests, but the test costs are very high. Because coupons must be produced, coupon materials are expensive, and test times are long, it is desirable to simulate these tests on the computer. To this end, various test coupons were produced using the reinforcements and fiber orientation angles of composite radomes integrated into aircraft. Glass-fiber-reinforced and quartz prepregs were used to manufacture the coupons. Tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and obtain an accurate bearing strength value. The finite element model was built in the Analysis System (ANSYS). Since a physical fracture cannot occur in analyses carried out in a virtual environment, failure is represented by reducing the material properties. The material property reduction coefficient was set to 10%, which the literature reports to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage points, the resulting force drops, the maximum damage loads, and the bearing strength values are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
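The property-degradation idea behind progressive failure analysis can be sketched as an iterative loop: whenever an element exceeds its allowable, its stiffness is knocked down and the load is re-shared among the survivors. The 10% retention factor follows the abstract, but the max-stress check and proportional load-sharing rule below are simplifications standing in for the Puck criterion and a real FE re-solve:

```python
DEGRADATION = 0.1  # a failed element retains 10% of its stiffness

def progressive_failure(elements, load_steps):
    """Track the failed-element count as the load is ramped up.

    elements: list of dicts {"E": stiffness, "strength": allowable, "failed": False}
    At each load level, iterate until no new failures occur; each failure
    degrades the element and redistributes load by remaining stiffness.
    """
    counts = []
    for load in load_steps:
        changed = True
        while changed:
            changed = False
            total_E = sum(e["E"] for e in elements)
            for e in elements:
                stress = load * e["E"] / total_E  # crude stiffness-based share
                if not e["failed"] and stress > e["strength"]:
                    e["failed"] = True
                    e["E"] *= DEGRADATION
                    changed = True
        counts.append(sum(e["failed"] for e in elements))
    return counts

elements = [{"E": 1.0, "strength": s, "failed": False} for s in (0.2, 0.3, 0.5)]
print(progressive_failure(elements, load_steps=[0.3, 0.7, 1.0, 1.5]))
```

Even this toy model reproduces the characteristic cascade: once the weakest element fails, load redistribution drives the others past their allowables, which is exactly the force-drop behavior compared against experiments above.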

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 97
595 Psychological Sense of School Membership and Coping Ability as Predictors of Multidimensional Life Satisfaction among School Children

Authors: Mary Banke Iyabo Omoniyi

Abstract:

Children in developing countries face complex social, economic, political and environmental contexts that create a wide range of challenges for them to surmount as they journey through school from childhood to adolescence. Many of these children have little or no personal resources and social support to confront these challenges. This study employed a descriptive survey research design to investigate psychological sense of school membership and coping skills as they relate to the multidimensional life satisfaction of school children. The sample consists of 835 school children aged 7-11 years who were randomly selected from twenty schools in Ondo state, Nigeria. The instrument for data collection was a questionnaire consisting of four sections A, B, C and D. Section A contained items on the children's bio-data (age, school, father's and mother's educational qualifications). Section B is the Multidimensional Children Life Satisfaction Questionnaire (MCLSQ), a 20-item Likert-type scale with a response format ranging from Never = 1 to Almost always = 4; the MCLSQ was designed to profile children's satisfaction with the important domains of school, family and friends. Section C is the Psychological Sense of School Membership Questionnaire (PSSMQ), with 18 items and a response format ranging from Not at all true = 1 to Completely true = 5. Section D is the Self-Report Coping Questionnaire (SRCQ), with 16 items and responses ranging from Never = 1 to Always = 5. The instrument has a test-retest reliability coefficient of r = 0.87, while the sectional reliabilities for the MCLSQ, PSSMQ and SRCQ are 0.86, 0.92 and 0.89, respectively. The results indicated that self-reported coping skill was significantly correlated with multidimensional life satisfaction (r = 0.592; p < 0.05). However, the correlation between multidimensional life satisfaction and psychological sense of school membership was not significant (r = 0.038; p > 0.05).
The regression analysis indicated the contributions of mother's education (R = 0.923, adjusted R² = 0.440) and father's education (R = 0.730, adjusted R² = 0.446) to the children's psychological sense of school membership. The contribution of gender to psychological sense of school membership was R = 0.782, adjusted R² = 0.478 for males and R = 0.998, adjusted R² = 0.932 for females. In conclusion, the mother's educational qualification was found to contribute more than the father's to children's psychological sense of membership and multidimensional life satisfaction. The girl child was also found to have a greater sense of belonging to the school setting than the boy child. The counselling implications and recommendations, among others, are geared towards positive emotional gender sensitivity with regard to the male folk. Education stakeholders are also encouraged to make the school environment more conducive and gender-friendly.

Keywords: multidimensional life satisfaction, psychological sense of school, coping skills, counselling implications

Procedia PDF Downloads 303
594 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters of Low-Z Material

Authors: H. M. Alfrihidi, H.A. Albarakaty

Abstract:

Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam-hardening effect normally provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters of low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials is higher for low-energy photons than for high-energy photons, which hardens the FFF beam and, consequently, reduces the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation; this file was scored just above the jaws, 27.88 cm from the target. The linac from the jaws downward was constructed, and the transmitted radiation was simulated and scored at 100 cm from the target. To study the effect of low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and a phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phase-space files. The dose distributions resulting from these beams were then simulated in a homogeneous water phantom using DOSXYZnrc, and the dose profiles were evaluated in terms of the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam.
The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. With a steel filter, the FFF beam's energy peak shifts to 1.1 MeV, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively, and the dose at a depth of 10 cm (D10) rose by around 2% and 0.5%, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively; however, this effect is smaller than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters of low-Z material decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although these filters affect the dose rate, the effect is much smaller than that of the FF.
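The hardening mechanism described above can be sketched numerically: a filter whose attenuation coefficient decreases with energy removes low-energy photons preferentially, raising the mean beam energy. The spectrum and μ(E) below are toy functions chosen only to show the effect, not tabulated attenuation data:

```python
import numpy as np

energies = np.linspace(0.1, 6.0, 500)   # photon energy grid, MeV
spectrum = np.exp(-energies)            # assumed soft, FFF-like spectrum (toy)

def mu(E):
    """Toy attenuation coefficient per cm: larger at low energy."""
    return 0.5 * E ** -0.5

def through_filter(weights, thickness_cm):
    """Beer-Lambert attenuation of each spectral bin."""
    return weights * np.exp(-mu(energies) * thickness_cm)

mean_before = np.average(energies, weights=spectrum)
mean_after = np.average(energies, weights=through_filter(spectrum, 1.0))
print(mean_before, mean_after)  # the filtered beam has a higher mean energy
```

This is the same trade-off reported in the study: the filter hardens the spectrum (higher mean energy, lower surface dose) at the cost of total fluence, i.e. dose rate.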

Keywords: flattening filter free, monte carlo, radiotherapy, surface dose

Procedia PDF Downloads 67
593 Altruistic and Hedonic Motivations to Write eWOM Reviews on Hotel Experience

Authors: Miguel Llorens-Marin, Adolfo Hernandez, Maria Puelles-Gallo

Abstract:

The increasing influence of Online Travel Agencies (OTAs) on hotel bookings, and of the electronic word-of-mouth (eWOM) they contain, has been identified by many scientific studies as a major factor in the booking decision. The main reason is that nowadays, in the hotel sector, consumers first encounter the offer through the web and the online environment. Because a hotel stay is booked in advance of actually seeing the product, consumers lack knowledge of its actual features, which makes eWOM a major channel for reducing perceived risk when making booking decisions. This research studies the relationship between how much customers are influenced by reading eWOM when booking a hotel and their propensity to write a review; in other words, it tests relationships between the reading and the writing of eWOM. It also investigates the importance of different underlying motivations for writing eWOM. Online surveys were used to obtain data from a sample of hotel customers, yielding 739 valid questionnaires. A measurement model and path analysis were used to analyze the chain of relationships between the independent variable (influenceability from reading reviews) and the dependent variable (propensity to write a review), with additional mediating variables that help explain the relationship. The authors also tested the moderating effects of age and gender in the model. The study considered three different underlying motivations for writing a review of a hotel experience: hedonic, altruistic and conflicted. Results indicate that the level of influenceability from reading reviews has a positive effect on the propensity to write reviews; the reading and the writing of reviews are thus linked.
The authors also find that the main underlying motivation for writing a hotel review is altruistic, with a higher standardized regression coefficient than the hedonic motivation. They suggest that the propensity to write reviews is related not to sociodemographic factors (age and gender) but to attitudinal factors such as the most influential factor when reading and the underlying motivations to write. This sheds light on the customer engagement motivations behind writing reviews. The implication is that managers should encourage their customers to write eWOM reviews on altruistic grounds, to help other customers make a decision. The most important contribution of this work is to link the effect of reading hotel reviews with the propensity to write reviews.
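The mediation logic behind such a path analysis can be sketched as two regressions: the direct effect of the independent variable X on the outcome Y shrinks once the mediator M is controlled for. The data below are simulated and the variable roles merely illustrative of the reading-to-writing chain, not the survey sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 739                                   # same sample size as the survey
X = rng.normal(size=n)                    # influenceability from reading (simulated)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # mediator, e.g. motivation
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.5, size=n)  # propensity to write

def coef_on_first(y, *regressors):
    """OLS coefficient of the first regressor (intercept included)."""
    A = np.column_stack(regressors + (np.ones(n),))
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

c = coef_on_first(Y, X)            # total effect of X on Y
c_prime = coef_on_first(Y, X, M)   # direct effect, controlling for M
print(c, c_prime)  # the drop from c to c_prime signals mediation
```

A full path-analytic model estimates all these paths simultaneously and can add age and gender as moderators, but the shrinking direct effect is the core diagnostic.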

Keywords: hotel reviews, electronic word-of-mouth (eWOM), online consumer reviews, digital marketing, social media

Procedia PDF Downloads 93
592 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat-plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs with different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations under the Debye-Hückel approximation (zeta potential below 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters that control the mass transport phenomenon appear: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the microchannel half-height to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which yields the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three excitation waveforms of the external electric field are assumed: (a) sawtooth, (b) step, and (c) irregular periodic functions. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were performed for several binary systems in which two dilute species are transported in the presence of a carrier.
It is observed that there are different angular frequencies of the imposed external electric signal where the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies are different depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and on the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied particular alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
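Under the Debye-Hückel approximation used above, the linearized Poisson-Boltzmann equation between parallel plates has the closed-form solution ψ(y) = ζ cosh(κy)/cosh(κH), with y measured from the channel centerline and H the half-height. A small sketch with illustrative parameter values (not the paper's cases):

```python
import numpy as np

zeta = 0.02      # wall zeta potential, V (below 25 mV, so Debye-Hückel holds)
H = 50e-6        # channel half-height, m
kappa_H = 20.0   # electrokinetic parameter: half-height over Debye length
kappa = kappa_H / H

y = np.linspace(-H, H, 201)
psi = zeta * np.cosh(kappa * y) / np.cosh(kappa * H)
print(psi[0], psi[100])  # equals zeta at the wall; nearly zero at the centerline
```

For large κH the potential, and hence the electric body force, is confined to thin layers near the walls, which is why the electrokinetic parameter controls both the velocity profile and the resulting mass transport.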

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 211
591 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. The grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or in nearshore zones, the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps of the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single- and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from sediment sample data collected in more than 30 studies. Because the information extracted varied between reports, a common grain size classification was defined here using the phi scale. The new sediment distribution maps, together with new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf, bounded by prominent nearshore headlands and reefs and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to morphological modelling to quantify sediment transport patterns (and indicative transport rates), which were in turn used to investigate the rates and processes of sand supply from the lower shoreface to the NSW coast.
The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and sub-aerial beach has been modelled across the south, central, mid-north and far-north coast of NSW. Our model approach is that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for re-distributing sand within the beach and surf zone.

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 223
590 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls, and we also attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information that provides clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item being a single piece of previously annotated information). To generate rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, conviction, interest factor, cosine, correlation coefficient) were calculated for each rule. Each interestingness measure revealed different rules, and we selected the top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are also strong rules that include neutral or even positive emotions.
Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone calls corpus. The acquired knowledge can be used for prediction to fulfill the emotional profile of a new caller. Furthermore, a rule-related possible context analysis may be a clue to the situation a caller is in.
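The two basic measures named above, support and confidence, can be computed directly from annotated transactions. The toy transactions below are invented, not the actual corpus, and are used only to evaluate a rule of the reported form {sadness} → {anxiety}:

```python
# Toy "transactions" of affect annotations (one set per recording)
transactions = [
    {"sadness", "anxiety"},
    {"sadness", "weariness", "stress", "frustration", "anger"},
    {"compassion", "sadness"},
    {"sadness", "anxiety", "stress"},
    {"calm"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y):
    """Confidence of the rule X -> Y: support(X | Y) / support(X)."""
    return support(X | Y) / support(X)

print(confidence({"sadness"}, {"anxiety"}))  # the rule {sadness} -> {anxiety}
```

Apriori's contribution is pruning: an itemset is only counted if all its subsets are already frequent, which avoids the exponential enumeration mentioned above.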

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 403
589 The Link Between Collaboration Interactions and Team Creativity Among Nursing Student Teams in Taiwan: A Moderated Mediation Model

Authors: Hsing Yuan Liu

Abstract:

Background: Considerable theoretical and empirical work has identified a relationship between collaboration interactions and creativity in an organizational context. The mechanisms underlying this link, however, are not well understood in healthcare education. Objectives: The aims of this study were to explore the impact of collaboration interactions on team creativity and its underlying mechanism and to verify a moderated mediation model. Design, setting, and participants: This study utilized a cross-sectional, quantitative, descriptive design. The survey data were collected from 177 nursing students who enrolled in 18-week capstone courses of small interdisciplinary groups collaborating to design healthcare products in Taiwan during 2018 and 2019. Methods: Questionnaires assessed the nursing students' perceptions about their teams' swift trust (of cognition- and affect-based), conflicts (of task, process, and relationship), interaction behaviors (constructive controversy, helping behaviors, and spontaneous communication), and creativity. This study used descriptive statistics to compare demographics, swift trust scores, conflict scores, interaction behavior scores, and creativity scores for interdisciplinary teams. Data were analyzed using Pearson’s correlation coefficient and simple and hierarchical multiple regression models. Results: Pearson’s correlation analysis showed the cognition-based team swift trust was positively correlated with team creativity. The mediation model indicated constructive controversy fully mediated the effect of cognition-based team swift trust on student teams’ creativity. The moderated mediation model indicated that task conflict negatively moderates the mediating effect of the constructive controversy on the link between cognition-based team swift trust and team creativity. 
Conclusion: Our findings suggest that nursing student teams' interaction behaviors and task conflict act, respectively, as crucial mediating and moderating variables in the relationship between collaboration interactions and team creativity. The empirical data confirm the validity of the proposed moderated mediation model of team creativity. This validated model could therefore guide nursing educators in improving collaboration interaction outcomes and creativity in nursing student teams.
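The hierarchical regression approach described above can be sketched in a few lines. The following Python example uses purely synthetic data with hypothetical variable names and effect sizes (not the study's data) to show how the moderated a-path and the mediated b-path are estimated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 177  # same sample size as the survey

# Synthetic data: swift trust -> constructive controversy -> creativity,
# with task conflict moderating the trust -> controversy (a) path.
trust = rng.normal(size=n)
conflict = rng.normal(size=n)
controversy = 0.5 * trust - 0.3 * trust * conflict + rng.normal(size=n)
creativity = 0.6 * controversy + rng.normal(size=n)
df = pd.DataFrame({"trust": trust, "conflict": conflict,
                   "controversy": controversy, "creativity": creativity})

# Mediator model: the interaction term carries the moderation of the a-path
med = smf.ols("controversy ~ trust * conflict", data=df).fit()
# Outcome model: the b-path, controlling for the original predictor
out = smf.ols("creativity ~ controversy + trust", data=df).fit()

print("a-path moderation:", round(med.params["trust:conflict"], 3))
print("b-path:", round(out.params["controversy"], 3))
```

A negative interaction coefficient in the mediator model together with a positive b-path is the signature pattern the study reports.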

Keywords: team swift trust, team conflict, team interaction behavior, moderated mediating effects, interdisciplinary education, nursing students

Procedia PDF Downloads 181
588 Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The 1934 book Inequalities by Hardy, Littlewood, and Pólya was the first significant treatise on the subject; it presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since then, numerous inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation, and these were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by means of differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Inequalities in their time-scale versions have received a lot of attention and have become a major field in both pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals in order to obtain new time-scale inequalities of Copson type driven by the Steklov operator; these can be applied to the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases, and the obtained inequalities rely on concepts from the time-scale calculus, such as Fubini's theorem and Hölder's inequality.
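For reference, the classical Hardy integral inequality that this line of work generalizes to time scales states that, for p > 1 and any non-negative measurable f,

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f^{p}(x)\,dx ,
\qquad p > 1,\ f \ge 0,
```

where the constant (p/(p-1))^p is sharp. Dynamic versions replace these integrals with time-scale (delta) integrals, recovering the continuous case on the real line and the discrete (series) case on the integers.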

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 71
587 Recommendations for Environmental Impact Assessment of Geothermal Projects on Mature Oil Fields

Authors: Daria Karasalihovic Sedlar, Lucija Jukic, Ivan Smajla, Marija Macenic

Abstract:

This paper analyses possible geothermal energy production from a mature oil reservoir, based on exploitation of the underlying aquifer's thermal energy for the purpose of heating public buildings. Research was conducted as a case study of the energy demand of the City of Ivanic-Grad's public buildings and the Ivanic oil field situated in the same area. Since the City of Ivanic-Grad is one of the few cities in the EU where hydrocarbon exploitation has been taking place for decades almost entirely in an urban area, decommissioning of oil wells is inevitable; therefore, the research goal was to investigate how to extend the lifetime of the reservoir by exploiting the geothermal brine beneath the oil reservoir in an environmentally friendly manner. This kind of project is extremely complex in all segments, from documentation preparation and implementation of technological solutions to providing ecological measures for environmentally acceptable geothermal energy production and utilization. New mining activities needed for the development of the geothermal project at the Hydrocarbon Exploitation Field Ivanic will be carried out in order to prepare wells for increased geothermal brine production. These operations involve the conversion of existing wells (well completion for converting observation wells to production wells), along with workover activities and the installation of new heat exchangers and pipelines. Since the wells are in a densely populated urban area of the City of Ivanic-Grad, the inhabitants will be exposed to various environmental impacts during the preparation phase of the project. For the purpose of performing workovers, it will be necessary to secure access to the wellheads of the existing wells.
This paper gives guidelines for describing the potential impacts on environmental components that could occur during preparation for geothermal production on an existing mature oil field, recommends possible protection measures to mitigate these impacts, and gives recommendations for environmental monitoring.

Keywords: geothermal energy production, mature oil field, environmental impact assessment, underlying aquifer thermal energy

Procedia PDF Downloads 142
586 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are well known for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different. These engines are mostly used in the maritime and agriculture industries and in stationary applications such as compressors; they are nevertheless quite often neglected and are notorious for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process in a high torque low speed diesel engine is of great importance. Evaporation in the combustion chamber has a rigorous effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using computational fluid dynamics (CFD) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results showed a considerable effect on the efficiency of the engine: the evaporation rate of a fuel droplet increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by including convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases.
This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
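The full O'Rourke spray model is a CFD sub-model, but the leading-order behavior of single-droplet evaporation can be illustrated with the classical d²-law, under which the squared droplet diameter shrinks linearly in time. The sketch below uses hypothetical values for the initial diameter and the evaporation constant:

```python
import numpy as np

def droplet_diameter(d0, K, t):
    """d^2-law: the squared diameter decreases linearly in time,
    d(t)^2 = d0^2 - K*t, until full evaporation at t = d0^2 / K."""
    return np.sqrt(np.maximum(d0**2 - K * t, 0.0))

d0 = 50e-6   # initial droplet diameter, m (hypothetical)
K = 5e-9     # evaporation constant, m^2/s (hypothetical)

lifetime = d0**2 / K
print(f"droplet lifetime: {lifetime * 1e3:.0f} ms")
# A larger evaporation constant (e.g. higher vapor pressure or convective
# heating) shortens the droplet lifetime in direct proportion:
print(f"lifetime with 2K: {d0**2 / (2 * K) * 1e3:.0f} ms")
```

This captures, in miniature, the trends reported above: higher vapor pressure and convective heat effects both increase K and thus shrink the droplet faster.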

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 337
585 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s relative to the CMB (the Local Group, at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears about 3.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is about 3.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α = 4σ/c is the radiation constant (σ being the Stefan-Boltzmann constant). The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance-modulus (µ) versus redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
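The photon-gas pressure and the dipole-induced pressure difference invoked above are easy to evaluate numerically. The sketch below uses the standard constants and a dipole amplitude of about 3.35 mK; it is an order-of-magnitude illustration, not part of the model fit:

```python
sigma = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
c = 299_792_458.0         # speed of light, m/s
a = 4 * sigma / c         # radiation constant, J m^-3 K^-4

def photon_gas_pressure(T):
    """P = a*T^4 / 3 for a thermalized photon gas at temperature T (kelvin)."""
    return a * T**4 / 3

T_cmb = 2.725             # mean CMB temperature, K
dT = 3.35e-3              # CMB dipole amplitude, K (v/c ~ 368/300000)
P = photon_gas_pressure(T_cmb)
# Pressure difference between the blueshifted and redshifted sides
dP = photon_gas_pressure(T_cmb + dT) - photon_gas_pressure(T_cmb - dT)
print(f"CMB photon-gas pressure: {P:.2e} Pa")
print(f"dipole pressure difference: {dP:.2e} Pa")
```

Both numbers come out extremely small (of order 10⁻¹⁴ Pa and 10⁻¹⁶ Pa, respectively), consistent with the abstract's statement that the drag on a macroscopic body is tiny.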

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 130
584 The Effects of Cooling during Baseball Games on Perceived Exertion and Core Temperature

Authors: Chih-Yang Liao

Abstract:

Baseball is usually played outdoors in the warmest months of the year; baseball players are therefore susceptible to the influence of a hot environment. It has been shown that hitting performance in Major League Baseball is better in games played in warm weather than in cold weather. Intermittent cooling during sporting events can reduce the risk of hyperthermia and increase endurance performance. However, the effects of cooling during baseball games played in a hot environment are unclear. This study adopted a cross-over design. Ten Division I collegiate male baseball players in Taiwan volunteered to participate. Each player played two simulated baseball games, with one day in between. Five of the players received intermittent cooling during the first simulated game, while the other five received intermittent cooling during the second. The participants' neck and forehead regions were covered for 6 min with towels that had been soaked in icy salt water, 3 to 4 times during the games. The participants received the cooling treatment in the dugout when they were not on the field for defense or hitting. During the two simulated games, the temperature was 31.1-34.1°C and the humidity was 58.2-61.8%, with no difference between the games. Ratings of perceived exertion, thermal sensation, and tympanic and forehead skin temperature were recorded immediately after each defensive half-inning and after each cooling treatment. Ratings of perceived exertion were measured using the Borg 10-point scale, and thermal sensation with a 6-point scale. The tympanic and skin temperatures were measured with infrared thermometers. The data were analyzed with a two-way analysis of variance with repeated measures. The results showed that intermittent cooling significantly reduced ratings of perceived exertion and thermal sensation. Forehead skin temperature was also significantly decreased after the cooling treatments.
However, the tympanic temperature was not significantly different between the two trials. In conclusion, intermittent cooling in the neck and forehead regions was effective in alleviating the perceived exertion and heat sensation. However, this cooling intervention did not affect the core temperature. Whether intermittent cooling has any impact on hitting or pitching performance in baseball players warrants further investigation.

Keywords: baseball, cooling, ratings of perceived exertion, thermal sensation

Procedia PDF Downloads 140
583 Experience of Two Major Research Centers in the Diagnosis of Cardiac Amyloidosis from Transthyretin

Authors: Ioannis Panagiotopoulos, Aristidis Anastasakis, Konstantinos Toutouzas, Ioannis Iakovou, Charalampos Vlachopoulos, Vasilis Voudris, Georgios Tziomalos, Konstantinos Tsioufis, Efstathios Kastritis, Alexandros Briassoulis, Kimon Stamatelopoulos, Alexios Antonopoulos, Paraskevi Exadaktylou, Evanthia Giannoula, Anastasia Katinioti, Maria Kalantzi, Evangelos Leontiadis, Eftychia Smparouni, Ioannis Malakos, Nikolaos Aravanis, Argyrios Doumas, Maria Koutelou

Abstract:

Introduction: Cardiac amyloidosis from transthyretin (ATTR-CA) is an infiltrative disease characterized by the deposition of pathological transthyretin complexes in the myocardium. This study describes the characteristics of patients diagnosed with ATTR-CA from 2019 to the present at the Nuclear Medicine Department of Onassis Cardiac Surgery Center and AHEPA Hospital. These centers have extensive experience in amyloidosis and modern technological equipment for its diagnosis. Materials and Methods: Records of consecutive patients (N=73) diagnosed with any type of amyloidosis were collected, analyzed, and prospectively followed. The diagnosis of amyloidosis was made using specific myocardial scintigraphy with Tc-99m DPD. Demographic characteristics, including age, gender, marital status, height, and weight, were collected in a database. Clinical characteristics, such as amyloidosis type (ATTR and AL), serum biomarkers (BNP, troponin), electrocardiographic findings, ultrasound findings, NYHA class, aortic valve replacement, device implants, and medication history, were also collected. Some of the most significant results are presented. Results: A total of 73 cases (86% male) were diagnosed with amyloidosis over four years. The mean age at diagnosis was 82 years, and the main symptom was dyspnea. Most patients suffered from ATTR-CA (65 vs. 8 with AL). Of the ATTR-CA patients, 61 were diagnosed with the wild-type form and 2 with rare mutations. Twenty-eight patients had systemic amyloidosis with extracardiac involvement, and 32 patients had a history of bilateral carpal tunnel syndrome. Four patients had already developed polyneuropathy, and the diagnosis was confirmed by DPD scintigraphy, which is known for its high sensitivity. Among patients with isolated cardiac involvement, only 6 had a left ventricular ejection fraction below 40%. The majority of ATTR patients began tafamidis treatment immediately after diagnosis.
Conclusion: The experiences shared by the two centers and their continuous exchange of information provide valuable insights into the diagnosis and management of cardiac amyloidosis. Clinical suspicion of amyloidosis and an early diagnostic approach are crucial, given the availability of non-invasive techniques: cardiac scintigraphy with DPD can confirm the presence of the disease without the need for a biopsy. The ultimate goal remains the continuous education and awareness of clinical cardiologists, so that this systemic and treatable disease can be diagnosed promptly and treatment can begin as soon as possible.

Keywords: amyloidosis, diagnosis, myocardial scintigraphy, Tc-99m DPD, transthyretin

Procedia PDF Downloads 81
582 Rediscovering English for Academic Purposes in the Context of the UN’s Sustainable Developmental Goals

Authors: Sally Abu Sabaa, Lindsey Gutt

Abstract:

In an attempt to use education as a way of raising socially responsible and engaged global citizens, the YU-Bridge program, the largest and fastest pathway program of its kind in North America, has embarked on the journey of integrating general themes from the UN's Sustainable Development Goals (SDGs) into its English for Academic Purposes (EAP) curriculum. The purpose of this initiative was to redefine the general philosophy of education in the middle of a pandemic and to align with York University's University Academic Plan, released in summer 2020 and framed around the SDGs. The YUB program attracts international students from all over the world, but mainly from China, and its goal is to enable students to achieve the minimum language requirement to join their undergraduate courses at York University. Along with measuring outcomes, objectives, and students' GPA, however, instructors and academics continually seek to innovate the YUB curriculum to adapt to the ever-growing challenges of academics in the university context and to focus more on the subject matter students will encounter in their undergraduate studies. With the sudden global changes brought by the COVID-19 pandemic, and by other natural disasters such as the increase in forest fires and floods, rethinking the philosophy and goal of education became a must. Accordingly, the SDGs became the solid pillars upon which we, the academics and administrators of the program, could build a new curriculum and shift our perspective from ESL education alone to education with moral and ethical goals. The preliminary implementation of this initiative was supported by an institution-wide consultation with EAP instructors of diverse experiences, disciplines, and interests.
Along with brainstorming sessions and mini pilot projects preceding the integration of the SDGs into the YUB-EAP curriculum, those meetings led to a general outline of a curriculum and an assessment framework that has the SDGs at its core, with ESL as the medium of language instruction. A community of knowledge exchange was spontaneously created and facilitated by instructors, leading to knowledge, resources, and teaching pedagogies being shared and examined further. In addition, the experiences and reactions of students are being shared, prompting constructive discussions about the opportunities and challenges of integrating the SDGs. These discussions have branched out into cultural and political barriers, along with a thirst for knowledge and engagement, which has increased engagement not only among the students but among the instructors as well. Later in the program, two surveys will be conducted, one for students and one for instructors, to measure each group's level of engagement in this initiative and to elicit suggestions for further development. This paper will describe this fundamental step toward using ESL methodology as a mode of disseminating essential ethical and socially conscious knowledge for all learners in the 21st century, the students' reactions, and the teachers' involvement and reflections.

Keywords: EAP, curriculum, education, global citizen

Procedia PDF Downloads 179
581 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change

Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investment required to install control equipment can be a limiting factor in the development of such demand response programs; the selection of consumers with higher potential is therefore crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which feature relatively high flexibility due to the heat capacity of buildings, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investment required for control systems enabling direct load control demand response programs, this paper presents a solution to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investment needed to implement direct load control programs for residential HVAC systems and for estimating the demand response potential in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data, using the K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
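The first clustering step can be sketched as follows, using synthetic smart-meter data in which roughly half of the (hypothetical) consumers have temperature-sensitive loads; the feature fed to K-means is each consumer's load-temperature correlation coefficient:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_consumers, n_hours = 200, 24 * 30           # hypothetical one-month data set
temperature = 25 + 8 * rng.random(n_hours)    # hourly ambient temperature, degC

# Synthetic loads: about half of the consumers are temperature sensitive,
# i.e. their consumption rises with ambient temperature (air conditioning)
sensitivity = np.where(rng.random(n_consumers) < 0.5, 0.8, 0.0)
load = (sensitivity[:, None] * temperature
        + rng.normal(scale=2.0, size=(n_consumers, n_hours)))

# Feature: correlation of each consumer's hourly load with hourly temperature
corr = np.array([np.corrcoef(row, temperature)[0, 1] for row in load])

# Cluster the consumers on that single feature with K-means
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(corr.reshape(-1, 1))
high_ac = km.labels_ == km.cluster_centers_.argmax()
print(high_ac.sum(), "consumers flagged as high-AC candidates")
```

The cluster with the higher center collects the temperature-correlated consumers, who would then be passed to the identification and setpoint-change steps described in the abstract.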

Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems

Procedia PDF Downloads 58
580 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals

Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya

Abstract:

A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of the residual stresses. A laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermo-optical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the interface between the laser-ultrasonic transducer and the object, most of the longitudinal wave energy is converted into shear, subsurface longitudinal, and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and is then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of the acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the velocity of longitudinal ultrasonic waves in the samples on the applied compression stress in the range of 20-100 MPa.
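The velocity-stress relation underlying such measurements is, to first order, linear (the acoustoelastic effect): v(σ) ≈ v₀(1 + βσ). The sketch below inverts this relation; the coefficient β is a hypothetical value, chosen only so that a ±3 m/s velocity resolution reproduces a detection threshold of the order quoted in the abstract:

```python
v0 = 5900.0        # unstressed longitudinal velocity in steel, m/s (typical)
beta = -2.24e-5    # acoustoelastic coefficient, 1/MPa (hypothetical value,
                   # chosen to reproduce the abstract's detection threshold)

def stress_from_velocity(v):
    """Invert the first-order acoustoelastic relation v = v0*(1 + beta*sigma)."""
    return (v / v0 - 1.0) / beta

dv = 3.0                            # velocity resolution, m/s (0.05% of v0)
sigma_min = abs(dv / (v0 * beta))   # smallest resolvable stress, MPa
print(f"minimum detectable stress: {sigma_min:.1f} MPa")
```

This illustrates why the quoted 0.05% velocity precision translates directly into a stress detection threshold: the threshold scales as dv/(v₀|β|).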

Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses

Procedia PDF Downloads 318
579 Study of the Uncertainty Behaviour for the Specific Total Enthalpy of the Hypersonic Plasma Wind Tunnel Scirocco at Italian Aerospace Research Center

Authors: Adolfo Martucci, Iulian Mihai

Abstract:

By means of the expansion through a conical nozzle and the low pressure inside the Test Chamber, a large, stable hypersonic flow is established for a duration of up to 30 minutes. Downstream of the Test Chamber, the diffuser has the function of reducing the flow velocity to subsonic values; as a consequence, the temperature increases again. In order to cool down the flow, a heat exchanger is present at the end of the diffuser. The Vacuum System generates the vacuum conditions necessary for correct hypersonic flow generation, and the DeNOx system, which follows the Vacuum System, reduces the nitrogen oxide concentrations created inside the plasma flow to below the limits imposed by Italian law. This very large, powerful, and complex facility allows researchers and engineers to reproduce entire re-entry trajectories of space vehicles into the atmosphere. One of the most important parameters for a hypersonic flowfield representative of re-entry conditions is the specific total enthalpy. This is the whole energy content of the fluid, and it represents how severe the conditions could be around a spacecraft re-entering from a space mission or, in our case, inside a hypersonic wind tunnel. It is possible to reach very high values of enthalpy (up to 45 MJ/kg) which, together with the large allowable size of the models, offer huge possibilities for on-ground experiments in the atmospheric re-entry field. The maximum nozzle exit section diameter is 1950 mm, where Mach numbers much higher than 1 can be reached. The specific total enthalpy is evaluated by means of a number of measurements, each of which contributes to its value and its uncertainty. The scope of the present paper is the evaluation of the sensitivity of the uncertainty of the specific total enthalpy to all the parameters and measurements involved. The sensors that, if improved, would give the greatest advantage have thus been identified.
Several simulations, implemented in Python with the METAS library and based on the Monte Carlo method, are presented together with the results obtained and a discussion of them.
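A minimal Monte Carlo propagation of this kind can be sketched in plain numpy (the actual work uses the METAS uncertainty library and the facility's full measurement model; the energy-balance formula and all input uncertainties below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical measurement models (mean, standard uncertainty) for an
# energy-balance estimate of the specific total enthalpy:
P_el = rng.normal(40e6, 0.5e6, N)     # electrical arc power, W
Q_cool = rng.normal(10e6, 0.3e6, N)   # power removed by cooling water, W
m_dot = rng.normal(1.0, 0.02, N)      # air mass flow rate, kg/s

h0 = (P_el - Q_cool) / m_dot          # specific total enthalpy samples, J/kg

print(f"h0 = {h0.mean()/1e6:.1f} +/- {h0.std()/1e6:.2f} MJ/kg")
# Crude sensitivity ranking: correlation of each input with the output
for name, x in [("P_el", P_el), ("Q_cool", Q_cool), ("m_dot", m_dot)]:
    print(f"{name}: {np.corrcoef(x, h0)[0, 1]:+.2f}")
```

Ranking the inputs by their correlation with the output is the simplest version of the sensitivity study described above: the input with the largest magnitude of correlation is the sensor whose improvement would shrink the enthalpy uncertainty the most.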

Keywords: hypersonic, uncertainty, enthalpy, simulations

Procedia PDF Downloads 88
578 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that previously reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly, and laborious. The present work aims to develop a simple, highly sensitive, precise, and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL; the coefficient of determination was greater than 0.9967, confirming the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium, and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma.
The method described in our article uses a simple protein precipitation extraction technique with ultraviolet detection for quantification. It is simple and robust for fast, high-throughput sample analysis, with a low analysis cost, for determining aminophylline in biological samples. No interfering peaks were observed at the elution times of aminophylline and the internal standard, and the method had sufficient selectivity, specificity, precision, and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underscoring the simplicity of the presented method.
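The linearity check and back-calculation step of such a validation can be sketched as follows; the calibration points and peak-area ratios below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical calibration standards spanning the validated range (ug/mL)
conc = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 40.0])
# Hypothetical analyte / internal-standard peak-area ratios
peak_ratio = np.array([0.021, 0.043, 0.210, 0.418, 0.845, 1.690])

# Least-squares calibration line and coefficient of determination
slope, intercept = np.polyfit(conc, peak_ratio, 1)
pred = slope * conc + intercept
ss_res = np.sum((peak_ratio - pred) ** 2)
ss_tot = np.sum((peak_ratio - peak_ratio.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"r^2 = {r2:.5f}")  # must exceed the 0.9967 acceptance criterion

# Back-calculate an unknown sample concentration from its measured ratio
unknown_ratio = 0.50
back_calc = (unknown_ratio - intercept) / slope
print(f"unknown sample: {back_calc:.2f} ug/mL")
```

The same back-calculation against the daily calibration line is what converts measured peak-area ratios into the plasma concentrations used in the pharmacokinetic analysis.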

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 214
577 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today Websites contain very interesting applications. But there are only few methodologies to analyze User navigations through the Websites and formulating if the Website is put to correct use. The web logs are only used if some major attack or malfunctioning occurs. Web Logs contain lot interesting dealings on users in the system. Analyzing web logs has become a challenge due to the huge log volume. Finding interesting patterns is not as easy as it is due to size, distribution and importance of minor details of each log. Web logs contain very important data of user and site which are not been put to good use. Retrieving interesting information from logs gives an idea of what the users need, group users according to their various needs and improve site to build an effective and efficient site. The model we built is able to detect attacks or malfunctioning of the system and anomaly detection. Logs will be more complex as volume of traffic and the size and complexity of web site grows. Unsupervised techniques are used in this solution which is fully automated. Expert knowledge is only used in validation. In our approach first clean and purify the logs to bring them to a common platform with a standard format and structure. After cleaning module web session builder is executed. It outputs two files, Web Sessions file and Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices. Web Sessions file lists down the indices of each web session. Then DBSCAN and EM Algorithms are used iteratively and recursively to get the best clustering results of the web sessions. Using homogeneity, completeness, V-measure, intra and inter cluster distance and silhouette coefficient as parameters these algorithms self-evaluate themselves to input better parametric values to run the algorithms. If a cluster is found to be too large then micro-clustering is used. 
A Cluster Signature Module then annotates each cluster with a unique signature, or fingerprint. Each cluster is fed to an Associative Rule Learning Module; if an access sequence is found with both confidence and support equal to 1, it becomes a potential signature for that cluster. The occurrences of the access sequence are then checked in the other clusters, and if it proves unique to the cluster under consideration, the cluster is annotated with that signature. These signatures support anomaly detection, cyber-attack prevention, real-time dashboards that visualize users as they access web pages, prediction of user actions, and various other applications for finance, university, and news and media websites.
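The fingerprint test (an access sequence whose support and confidence both equal 1 within a cluster) can be sketched as follows; the session encoding and function names are illustrative assumptions, not the authors' code:

```python
def rule_metrics(sessions, prefix, target):
    """Support and confidence of the rule prefix -> prefix + target,
    where each session is a list of indexed URLs (as in the Indexed URLs /
    Web Sessions files) and sequences must occur contiguously."""
    def contains(seq, sub):
        return any(seq[i:i + len(sub)] == sub
                   for i in range(len(seq) - len(sub) + 1))
    n_rule = sum(1 for s in sessions if contains(s, prefix + target))
    n_prefix = sum(1 for s in sessions if contains(s, prefix))
    support = n_rule / len(sessions)
    confidence = n_rule / n_prefix if n_prefix else 0.0
    return support, confidence
```

A sequence qualifies as a candidate signature only when both values are 1.0 over the cluster's sessions; it is then checked for uniqueness across the other clusters before being adopted as the fingerprint.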

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 280
576 Foreseen the Future: Human Factors Integration in European Horizon Projects

Authors: José Manuel Palma, Paula Pereira, Margarida Tomás

Abstract:

The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to optimize systems and processes, thereby contributing to a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge Industry 4.0 and 5.0 technology. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects take different approaches to HFI. AGILEHAND is mainly empirical, comparing current and future working conditions, drawing on best practices, and enhancing safety aspects primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and ways to overcome them through interviews, questionnaires, literature review, and case studies. Findings and results will be presented in the handbook "Strategies for Workers' Skills Development, Health and Safety, Communication and Engagement".
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, together with guidance on complying with European Union safety and privacy regulations, thereby ensuring HFI and contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled "Human Safety and Privacy Measures". These projects received funding from the European Union's Horizon 2020/Horizon Europe research and innovation programme under grant agreements No. 101092043 (AGILEHAND) and No. 101135707 (FORTIS).

Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0

Procedia PDF Downloads 54
575 Emphasis on Difference: Ethnic and National Cultural Heritage Identities and Issues in East Asia Focusing on Korea Cases

Authors: Hyuk-Jin Lee

Abstract:

Well into the 21st century, cultural identities centered on the nation-state and nationality remain the sentiments and ideologies that dominate the world. Nevertheless, as many cases in Europe show, a new perspective is needed that recognizes mutual exchanges and influences and views them as natural cultural exchanges between countries. The situation in East Asia is completely different from that of Europe. This presumably stems from a long tradition, spanning at least several hundred years, of an ethnocentric concept of the state, quite unlike Europe, where the concept of the nation-state was established relatively recently. In other words, unlike Europe, where active exchanges took place, the problem arises from the distinctive character of East Asia, with its strong tradition of locating identity in 'difference'. It is therefore not hard to find cultural studies or news items from the three East Asian countries emphasizing their differences from one another. This applies to all cultural domains, including traditional architecture. In the field of Korean traditional architecture, for example, buildings showing influence from neighboring countries tend to be ignored, even when they are traditional Korean architecture. In Korea's case, moreover, there appears to be one further cultural aftereffect of the 36 years of Japanese colonial rule in the early 20th century: the obsessive filtering principle that 'it must be different from Japan'. In other words, an implicit ideological coercion, holding that the definition of 'Korean cultural heritage' must not reflect exchanges with Japan, can be found throughout Korean studies.
The architecture and culture of the vast period from the Three Kingdoms era to early Joseon, a period of relatively strong cultural exchange with neighboring countries compared with the late Joseon Dynasty, likewise reflect the 'distorted filtering' born of an identity defined in opposition to the Japanese colonial period. It is important to examine cultural heritage and traditions as they are, inductively rather than deductively; otherwise, we may ignore or diminish precious elements of our own cultural heritage. Conversely, if craftsmen from Baekje, the ancient Korean kingdom, helped Japan in construction and played a large role in building its ancient temples, it is a healthier perspective to view this as cultural exchange rather than to claim it proudly from a cultural owner's perspective, because the former view properly reconstructs ancient and medieval Asian culture (strictly speaking, the character common to East Asia at the time). This study examines the topic through specific examples from each field of Korean cultural studies. In the search for cultural identity, minimizing the excessive weight placed on originality and difference would better serve healthy relations between countries and collaborative research, both in the sensitive interpretation of historical facts and in cultural circles.

Keywords: cultural heritage identity, cultural ideology, East Asia, Korea

Procedia PDF Downloads 71
574 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students

Authors: Majed A. Alsalem

Abstract:

In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments that require understanding and knowledge of the world around them. Literacy also enhances students' participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Literacy therefore needs to be studied and understood in its full range of contexts: as social and cultural practice with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs used for deaf and hard-of-hearing (DHH) students in order to improve their literacy level. The most critical part of this process is the teachers, so teachers are the central focus of this study. Teachers' main job is to raise student performance by fostering strategies built on collaborative teamwork, higher-order thinking, and effective use of new information technologies. As primary leaders in the learning process, teachers should be aware of new strategies, approaches, methods, and frameworks of teaching so that they can apply them in their instruction. In its wider sense, literacy means acquiring reading skills adequate and relevant for progressing in one's career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy as simply the ability to read and write: new literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term has received considerable attention in the education field over the last few years.
New literacy provides multiple modes of engagement, especially for those with disabilities and other diverse learning needs. For example, using online tools in the classroom gives students with disabilities new ways to engage with content, take in information, and express their understanding of it. This study will provide teachers with high-quality training sessions designed to meet the needs of DHH students and thereby raise their literacy levels, building a bridge between conventional instructional designs and digital materials that students can interact with. The intervention will train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis conducted for this study, a sample of 98 teachers is required. Teachers will be chosen randomly to increase internal and external validity and to provide a representative sample of the population the study aims to measure, providing a base for future and further studies. The study is still in progress, and the initial results are promising, showing strong student engagement with digital books.

Keywords: deaf and hard of hearing, digital books, literacy, technology

Procedia PDF Downloads 483
573 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

Because waxy crude oil readily crystallizes and deposits on the pipeline wall, it clogs pipelines and reduces the efficiency of oil and gas gathering and transmission. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules, which serve as their nucleation sites. Meanwhile, paraffin molecules on the near-wall side are observed to adsorb horizontally in the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SW, HW, and GW walls; for the RW wall, however, the contact angle of paraffin wax was found to decrease with increasing pressure while that of water showed the opposite trend, a phenomenon attributed to the pressure-driven transition of paraffin molecules from an amorphous to a crystalline state. The minimum crystalline phase pressure (MCPP) is proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed at the MCPP in each system followed NSW (0.52 MPa) > NHW (0.55 MPa) > NRW (0.62 MPa) > NGW (0.75 MPa). The graphene surface had the highest MCPP and formed the fewest clusters, indicating that the addition of graphene inhibits the crystallization of paraffin deposits on the wall surface.
Finally, the thermal conductivity was calculated. On the near-wall side, the thermal conductivity changes drastically owing to the adsorption crystallization of paraffin waxes, while on the fluid side it gradually stabilizes; the average thermal conductivities of the four wall systems were 0.254, 0.249, 0.218, and 0.188 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 69
572 Gassing Tendency of Natural Ester Based Transformer Oils: Low Alkane Generation in Stray Gassing Behaviour

Authors: Thummalapalli CSM Gupta, Banti Sidhiwala

Abstract:

Mineral oils of naphthenic and paraffinic types have traditionally been used as insulating liquids in transformers, protecting the solid insulation from moisture and ensuring effective heat transfer and cooling. The performance of these oils has been proven in the field over many decades, and transformer condition monitoring and diagnosis have been carried out successfully through oil property measurements and dissolved gas analysis, with different gases indicating different fault types arising from components or operating conditions. While a large body of industry data on dissolved gas analysis has been generated for mineral-oil-based transformer oils, along with various models for fault prediction and analysis, oil specifications and standards have also been modified to include stray gassing limits. These cover low-temperature faults and provide an effective preventive maintenance tool for understanding the breakdown of electrical insulating materials and related components. Natural esters have grown in popularity in recent years owing to their "green" credentials: they offer biodegradability, a higher fire point, improved transformer load capability, and longer solid insulation life than mineral oils. However, they evolve stray gases such as hydrogen and hydrocarbons like methane (CH4) and ethane (C2H6) at values far above the limits in mineral oil standards. Although standards for these esters have yet to be established, the high hydrocarbon gas values of products currently on the market are a concern, as they might be misinterpreted as a fault in transformer operation.
The current paper focuses on developing a natural-ester-based transformer oil that shows very low levels of stray gassing by standard test methods, much lower than those of products currently available; experimental results under various test conditions are presented and the underlying mechanism is explained.

Keywords: biodegradability, fire point, dissolved gas analysis, stray gassing

Procedia PDF Downloads 91
571 Impact of Climate Change on Some Physiological Parameters of Cyclic Female Egyptian Buffalo

Authors: Nabil Abu-Heakal, Ismail Abo-Ghanema, Basma Hamed Merghani

Abstract:

The aim of this investigation is to study the effect of seasonal variations in Egypt on the hematological parameters and the reproductive and metabolic hormones of Egyptian buffalo-cows. The study lasted one year, from December 2009 to November 2010, and was conducted on sixty buffalo-cows. Each month, a group of five buffalo-cows in the estrus phase was selected. Blood was sampled by tail vein puncture on the second day after natural service and divided into two samples: one with anticoagulant for hematological analysis and one without for serum separation. The results revealed that the highest atmospheric temperature occurred in hot summer (32.61±1.12 °C, versus 26.18±1.67 °C in spring and 19.92±0.70 °C in winter), while the highest relative humidity occurred in winter (43.50±1.60%, versus 32.50±2.29% in summer). The rise in the temperature-humidity index from 63.73±1.29 in winter to 78.53±1.58 in summer indicates severe heat stress, which was associated with significant reductions in total red blood cell count (3.20±0.15×10⁶), hemoglobin concentration (8.83±0.43 g/dl), packed cell volume (30.73±0.12%), lymphocyte percentage (40.66±2.33%), serum progesterone concentration (0.56±0.03 ng/ml), estradiol-17β concentration (16.8±0.64 ng/ml), triiodothyronine (T3) concentration (2.33±0.33 ng/ml), and thyroxine (T4) concentration (21.66±1.66 ng/ml). Hot summer also produced significant increases in mean cell volume (96.55±2.25 fl), mean cell hemoglobin (30.81±1.33 pg), total white blood cell count (10.63±0.97×10³), neutrophil percentage (49.66±2.33%), serum prolactin (PRL) concentration (23.45±1.72 ng/ml), and cortisol concentration (4.47±0.33 ng/ml) compared with winter. There was no significant seasonal variation in mean cell hemoglobin concentration (MCHC).
It was concluded that Egypt shows marked seasonal variation in atmospheric temperature, relative humidity, and temperature-humidity index (THI), and that the rise of the THI above the upper critical level for lactating buffalo-cows (72 units) is the major constraint on buffalo-cows' hematological parameters and hormone secretion, affecting animal reproduction. Climatic conditions inside dairy farms should therefore be improved to eliminate or reduce summer infertility.
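The THI values reported above are consistent with the common Fahrenheit-based livestock THI formula. The sketch below assumes that formula (the abstract does not state which one was used) and reproduces the summer figure from the reported means:

```python
def thi(temp_c, rh_pct):
    """Temperature-humidity index from dry-bulb temperature (deg C) and
    relative humidity (%): THI = Tf - (0.55 - 0.0055 * RH) * (Tf - 58),
    with Tf the temperature in deg F. Assumed formula, not from the paper."""
    tf = 1.8 * temp_c + 32.0
    return tf - (0.55 - 0.0055 * rh_pct) * (tf - 58.0)

summer_thi = thi(32.61, 32.50)  # reported summer means for temperature and RH
```

With the reported summer means (32.61 °C, 32.5% RH) this gives about 78.6, close to the reported 78.53±1.58 and well above the 72-unit critical threshold.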

Keywords: buffalo, climate change, Egypt, physiological parameters

Procedia PDF Downloads 649
570 Purification and Characterization of a Novel Extracellular Chitinase from Bacillus licheniformis LHH100

Authors: Laribi-Habchi Hasiba, Bouanane-Darenfed Amel, Drouiche Nadjib, Pausse André, Mameri Nabil

Abstract:

Chitin, a linear β-1,4-linked N-acetyl-D-glucosamine (GlcNAc) polysaccharide, is the major structural component of fungal cell walls, insect exoskeletons, and crustacean shells. It is one of the most abundant naturally occurring polysaccharides and has attracted tremendous attention in agriculture, pharmacology, and biotechnology. Each year, a vast amount of chitin waste is released from the aquatic food industry, in which crustaceans (prawn, crab, shrimp, and lobster) constitute one of the main agricultural products, creating a serious environmental problem. This linear polymer can be hydrolyzed by bases, acids, or enzymes such as chitinases. In this context, an extracellular chitinase (ChiA-65) was produced and purified from the newly isolated strain LHH100. Pure protein was obtained after heat treatment and ammonium sulphate precipitation followed by Sephacryl S-200 chromatography. Based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/MS) analysis, the purified enzyme is a monomer with a molecular mass of 65,195.13 Da. The sequence of the 27 N-terminal residues of mature ChiA-65 showed high homology with family-18 chitinases. Optimal activity was achieved at pH 4 and 75 °C. Among the inhibitors and metals tested, p-chloromercuribenzoic acid, N-ethylmaleimide, Hg2+, and Hg+ completely inhibited enzyme activity. Chitinase activity was high on colloidal chitin, glycol chitin, glycol chitosan, chitotriose, and chitooligosaccharides. Activity towards the synthetic substrates p-NP-(GlcNAc)n (n = 2-4) followed the order p-NP-(GlcNAc)2 > p-NP-(GlcNAc)4 > p-NP-(GlcNAc)3. Our results suggest that ChiA-65 preferentially hydrolyzed the second glycosidic link from the non-reducing end of (GlcNAc)n. ChiA-65 obeyed Michaelis-Menten kinetics, with Km and kcat values of 0.385 mg colloidal chitin/ml and 5000 s⁻¹, respectively.
ChiA-65 exhibited remarkable biochemical properties suggesting that this enzyme is suitable for bioconversion of chitin waste.
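The reported kinetic constants plug directly into the Michaelis-Menten rate law. A small sketch follows; the enzyme amount e_total is a hypothetical placeholder, not a value from the paper:

```python
def mm_rate(s, km=0.385, kcat=5000.0, e_total=1.0):
    """Michaelis-Menten rate v = kcat * [E]_total * S / (Km + S), with
    Km = 0.385 mg colloidal chitin/ml and kcat = 5000 1/s as reported
    for ChiA-65; e_total is a hypothetical enzyme amount."""
    return kcat * e_total * s / (km + s)
```

At a substrate concentration equal to Km, the rate is exactly half of Vmax = kcat * e_total, a quick sanity check on the reported constants.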

Keywords: Bacillus licheniformis LHH100, characterization, extracellular chitinase, purification

Procedia PDF Downloads 432