Search results for: Likert scale test

1792 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target

Authors: Vishal Raj, Noorhan Abbas

Abstract:

Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguities such as lexical, syntactic, semantic, anaphoric and referential ambiguities. This study focuses mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words for training and testing. The lemma adds generality, and the POS adds the word's properties to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity value are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense accumulates the highest value becomes the sense of the target word. Since contextual tokens play a key role in creating sense clusters and predicting the sense of the target word, the model is named the Contextual SenSe Model (CSM). CSM exhibits noteworthy simplicity and explanatory clarity in contrast to contemporary deep learning models, which are characterized by intricacy, time-intensive processes, and challenging interpretation. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
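The affinity-matrix construction and the context-weighted sense voting described above can be illustrated with a minimal sketch. This is not the authors' implementation: the co-occurrence-based normalisation, the toy sentences and the sense clusters below are illustrative assumptions only.

```python
from collections import defaultdict
from itertools import combinations

def build_affinity_matrix(sentences):
    """Count co-occurrences of lemma_POS tokens within a sentence and
    normalise them into a symmetric affinity score (illustrative scheme)."""
    cooc = defaultdict(float)
    freq = defaultdict(int)
    for tokens in sentences:                      # tokens: list of 'lemma_POS' strings
        for t in tokens:
            freq[t] += 1
        for a, b in combinations(set(tokens), 2):
            cooc[frozenset((a, b))] += 1.0
    return {pair: count / (freq[tuple(pair)[0]] * freq[tuple(pair)[1]]) ** 0.5
            for pair, count in cooc.items()}

def predict_sense(target, context, sense_clusters, affinity):
    """Each contextual token votes for every candidate sense of the target;
    the sense with the highest accumulated affinity wins."""
    scores = defaultdict(float)
    for sense, members in sense_clusters.get(target, {}).items():
        for c in context:
            for m in members:
                scores[sense] += affinity.get(frozenset((c, m)), 0.0)
    return max(scores, key=scores.get) if scores else None

# Toy usage with two hand-made sense clusters for 'bank_NOUN'.
sents = [["bank_NOUN", "money_NOUN", "deposit_VERB"],
         ["bank_NOUN", "river_NOUN", "flow_VERB"]]
aff = build_affinity_matrix(sents)
clusters = {"bank_NOUN": {"finance": ["money_NOUN", "deposit_VERB"],
                          "geo": ["river_NOUN", "flow_VERB"]}}
print(predict_sense("bank_NOUN", ["money_NOUN"], clusters, aff))  # -> 'finance'
```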

Keywords: word sense disambiguation (wsd), contextual sense model (csm), most frequent sense (mfs), part of speech (pos), natural language processing (nlp), oov (out of vocabulary), lemma_pos (a token where lemma and pos of word are joined by underscore), information retrieval (ir), machine translation (mt)

Procedia PDF Downloads 108
1791 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests that infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
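The estimation step of equating conduction through the wall to the exterior convective and radiative losses can be expressed in a few lines. The sketch below assumes one common formulation from the thermography literature, with an assumed exterior convective coefficient and purely illustrative temperatures rather than the study's field data.

```python
STEFAN_BOLTZMANN = 5.67e-8  # W/m^2.K^4

def u_value_from_ir(t_surface_c, t_reflected_c, t_out_c, t_in_c,
                    emissivity=0.90, h_conv=15.0):
    """Estimate a wall U-value (W/m^2.K) by equating conduction through the
    wall to the convective + radiative losses at the exterior surface."""
    ts, tr = t_surface_c + 273.15, t_reflected_c + 273.15
    q_rad = emissivity * STEFAN_BOLTZMANN * (ts**4 - tr**4)   # radiative loss
    q_conv = h_conv * (t_surface_c - t_out_c)                 # convective loss
    return (q_conv + q_rad) / (t_in_c - t_out_c)

# Illustrative winter measurement: exterior surface at -8 C, reflected
# apparent temperature -12 C, outdoor air -10 C, indoor air 21 C.
print(round(u_value_from_ir(-8.0, -12.0, -10.0, 21.0), 3))
```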

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 313
1790 Self-Esteem and Emotional Intelligence’s Association to Nutritional Status in Adolescent Schoolchildren in Chile

Authors: Peter Mc Coll, Alberto Caro, Chiara Gandolfo, Montserrat Labbe, Francisca Schnaidt, Michela Palazzi

Abstract:

Self-esteem and emotional intelligence are variables that are related to people's nutritional status. Self-esteem may be at low levels in people living with obesity, while emotional intelligence can play an important role in the way people living with obesity cope. The objective of the study was to measure the association of self-esteem and emotional intelligence with nutritional status in an adolescent population. Methodology: A cross-sectional study was carried out with 179 adolescent schoolchildren between 13 and 19 years old from a public school. To evaluate nutritional status, weight and height were measured, and the body mass index and Z score were calculated. Self-esteem was evaluated using the Coopersmith Self-Esteem Inventory adapted by Brinkmann and Segure. Emotional intelligence was measured using the Emotional Quotient Inventory: Short, by Bar-On, an adapted questionnaire translated into Spanish by López Zafra. For statistical analysis, Pearson's Chi-square test, Pearson's correlation, and odds ratio calculation were used, with a significance level of 5%. Results: The study group was composed of 71% females and 29% males. Nutritional status was distributed as eutrophic 41.9%, overweight 20.1%, and obesity 21.1%. In relation to self-esteem, 44.1% presented low and very low levels, without differences by gender. Emotional intelligence was distributed as low 3.4%, medium 81%, and high 13.4%, with no differences according to gender. For the association between nutritional status (overweight and obesity) and low and very low self-esteem, an odds ratio of 2.5 (95% CI 1.12-5.59) was obtained, with a p-value = 0.02. The correlation analysis between the intrapersonal sub-dimension emotional intelligence scores and the Z score of nutritional status presented a negative correlation of r = -0.209 with a p-value < 0.005. The correlation between the emotional intelligence sub-dimension stress management and the Z score presented a positive correlation of r = 0.0161 with a p-value < 0.05. In conclusion, the group of adolescents studied had a high prevalence of overweight and obesity, a high prevalence of low self-esteem, and a high prevalence of average emotional intelligence. Overweight and obese adolescents were 2.5 times more likely to have low self-esteem. As overweight and obesity increase, self-esteem decreases, and the ability to manage stress increases.
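The headline odds ratio reported above follows from a standard 2x2 table calculation. A minimal sketch is given below, with hypothetical cell counts chosen only to demonstrate the arithmetic; they are not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
                          overweight/obese   eutrophic
    low/very low SE              a               b
    normal/high SE               c               d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (not the study data).
print(odds_ratio_ci(45, 34, 29, 55))
```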

Keywords: self-esteem, emotional intelligence, obesity, adolescent, nutritional status

Procedia PDF Downloads 59
1789 Teaching Research Methods at the Graduate Level Utilizing Flipped Classroom Approach; An Action Research Study

Authors: Munirah Alaboudi

Abstract:

This paper discusses a research project carried out with 12 first-year graduate students enrolled in a research methods course, prior to undertaking a graduate thesis, during the academic year 2019. The research was designed with the objective of creating a research methods course structure that embraces an individualized and activity-based approach to learning in a highly engaging group environment. This approach aimed to innovate on the traditional lecture-based, theoretical research methods format, in which students reported low engagement and limited learning. The study utilized an action research methodology in developing a different approach to research methods instruction, in which student performance indicators and feedback were periodically collected to assess the new teaching method. Student learning was achieved through the flipped classroom approach, where students learned the material at home and classroom activities were designed to implement and experiment with the newly acquired information under the guidance of the course instructor. Student learning in class was practiced through a series of activities based on different research methods. With the goal of encouraging student engagement, a wide range of activities was utilized, including workshops, role play, mind-mapping, presentations, and peer evaluations. Data were collected through an open-ended qualitative questionnaire to establish whether students were engaged in the material they were learning, to what degree they were engaged, and to test their mastery of the concepts discussed. Analysis of the data presented positive results, as around 91% of the students reported feeling more engaged with the active learning experience and with learning research by “actually doing research, not just reading about it”. The students expressed feeling invested in the process of their learning as they saw their research “gradually come to life” through peer learning and practice during workshops. Based on the results of this study, the research methods course structure was successfully remodeled and continues to be delivered.

Keywords: research methods, higher education instruction, flipped classroom, graduate education

Procedia PDF Downloads 103
1788 Exploratory Factor Analysis of Natural Disaster Preparedness Awareness of Thai Citizens

Authors: Chaiyaset Promsri

Abstract:

Based on a synthesis of the related literature, this research found thirteen related dimensions involved in the development of natural disaster preparedness awareness, including hazard knowledge, hazard attitude, training for disaster preparedness, rehearsal and practice for disaster preparedness, cultural development for preparedness, public relations and communication, storytelling, disaster awareness games, simulation, past experience of natural disasters, information sharing with family members, and commitment to the community (time of living). A 40-item natural disaster preparedness awareness questionnaire was developed based on these thirteen dimensions. Data were collected from 595 participants in the Bangkok metropolitan area and vicinity. Cronbach's alpha was used to examine the internal consistency of the instrument. The reliability coefficient was .97, which was highly acceptable. Exploratory factor analysis (EFA) with principal axis factoring was employed. The Kaiser-Meyer-Olkin index of sampling adequacy was .973, indicating that the data represented a homogeneous collection of variables suitable for factor analysis. Bartlett's test of sphericity was significant for the sample (Chi-Square = 23168.657, df = 780, p-value < .0001), which indicated that the set of correlations in the correlation matrix was significantly different from zero and acceptable for EFA. Factor extraction was done to determine the number of factors using principal component analysis with varimax rotation. The result revealed that four factors had eigenvalues greater than 1, together accounting for more than 60% cumulative variance. Factor #1 had an eigenvalue of 22.270, and factor loadings ranged from 0.626-0.760; this factor was named "Knowledge and Attitude of Natural Disaster Preparedness". Factor #2 had an eigenvalue of 2.491, and factor loadings ranged from 0.596-0.696; this factor was named "Training and Development". Factor #3 had an eigenvalue of 1.821, and factor loadings ranged from 0.643-0.777; this factor was named "Building Experiences about Disaster Preparedness". Factor #4 had an eigenvalue of 1.365, and factor loadings ranged from 0.657-0.760; this factor was named "Family and Community". The results of this study provided support for the reliability and construct validity of the natural disaster preparedness awareness instrument for use with populations similar to the sample employed.
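The two headline statistics above (Cronbach's alpha and the Kaiser eigenvalue-greater-than-1 retention rule) can be reproduced with a short numpy sketch; the simulated Likert responses below are placeholders, not the survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert-type scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def kaiser_retained_factors(items):
    """Number of factors with eigenvalue > 1 from the correlation matrix,
    plus the cumulative variance those factors explain."""
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    retained = int((eigvals > 1).sum())
    return retained, eigvals[:retained].sum() / eigvals.sum()

# Simulated 5-point Likert responses (595 respondents x 40 items), for illustration only.
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(595, 40))
print(cronbach_alpha(data), kaiser_retained_factors(data))
```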

Keywords: natural disaster, disaster preparedness, disaster awareness, Thai citizens

Procedia PDF Downloads 378
1787 Sound Quality Analysis of Sloshing Noise from a Rectangular Tank

Authors: Siva Teja Golla, B. Venkatesham

Abstract:

The recent technologies in hybrid and high-end cars have reduced the noise from major sources like engines and transmission systems. This has resulted in the unmasking of previously subdued noises. These noises are becoming noticeable to the passengers, causing annoyance and affecting the perceived quality of the vehicle. Sloshing in the fuel tank is one such source of noise. Sloshing occurs due to the excitations the fuel tank undergoes as the vehicle moves. Sloshing noise occurs due to the interaction of the fluid with the surrounding tank walls or with the fluid itself. The noise resulting from the interaction of the fluid with the structure is ‘hit noise’, and the noise due to fluid-fluid interaction is ‘splash noise’. The type of interactions the fluid undergoes inside the tank and the type of noise generated depend on a variety of factors like the fill level of the tank, the type of fluid, the presence of objects like baffles inside the tank, the type and strength of the excitation, etc. Studies have been done to understand the effect of each of these parameters on the generation of different types of sloshing noises, but little work has been done on the psychoacoustic aspects of these sounds. A psychoacoustic study of sloshing noises gives an understanding of the level of annoyance they can cause to the passengers and helps in taking the necessary measures to address it. In view of this, the current paper focuses on the calculation of psychoacoustic parameters like loudness, sharpness, roughness and fluctuation strength for the sloshing noise. As the noise generation mechanisms for the hit and splash noises are different, these parameters are calculated separately for them. For this, the fluid flow regimes that predominantly cause the hit and splash noises have to be separately emulated inside the tank. This is done through a reciprocating test rig, which imposes a reciprocating excitation on a rectangular tank filled with the fluid. By varying the frequency of excitation, the fluid flow regimes with predominant generation of hit and splash noises can be separately created inside the tank. These tests are done in a quiet room, and the noise generated is captured using microphones and used for the calculation of the psychoacoustic parameters of the sloshing noise. This study also includes the effect of fill level and the presence of baffles inside the tank on these parameters.

Keywords: sloshing, hit noise, splash noise, sound quality

Procedia PDF Downloads 29
1786 Collaboration between Dietician and Occupational Therapist Promotes Independent Functional Eating in the Tube Weaning Process of Mechanically Ventilated Patients

Authors: Inbal Zuriely, Yonit Weiss, Hilla Zaharoni, Hadas Lewkowicz, Tatiana Vander, Tarif Bader

Abstract:

Early active movement, along with adjusting optimal nutrition, prevents aggravation of muscle degeneracy and functional decline. Eating is a basic activity of daily life, which reflects the patient's independence. When eating and feeding are experienced successfully, they lead to a sense of pleasure and satisfaction. However, when they are experienced as a difficulty, they might evoke feelings of helplessness and frustration. This stresses the importance of the process of gradual weaning off the enteral feeding tube. This work describes the collaboration of a dietitian, who determines the nutritional needs of patients undergoing enteral tube weaning as part of the rehabilitation process, with the tailored treatment of an occupational therapist. Occupational therapy intervention regarding eating capabilities focuses on improving the required motor and cognitive components, along with environmental adjustments and aids, imparting eating strategies and training to patients and their families. The project was conducted in the long-term ventilated patients’ department at the Herzfeld Rehabilitation Geriatric Medical Center, on patients undergoing enteral tube weaning with the staff’s assistance. Continuous collaboration was established between the dietician and the occupational therapist, starting from the beginning of the feeding-tube weaning process: 1. The dietician updates the occupational therapist about the start of the process and the approved diet. 2. The occupational therapist performs cognitive, motor, and functional assessments and treatments regarding the patient’s eating capabilities and recommends the required adjustments for independent eating according to the FIM (Functional Independence Measure) scale. 3. The occupational therapist closely follows up on the patient’s degree of independence in eating and provides repeated updates to the dietician. 4. The dietician accordingly guides the ward staff on whether and how to feed the patient or allow independent eating. The project aimed to promote patients toward independent feeding, which leads to a sense of empowerment, enjoyment of the eating experience, and progress in functional ability, along with performing active movements that will motivate mobilization. From the beginning of 2022, 26 patients participated in the project. 79% of all patients who started the weaning process from tube feeding achieved different levels of independence in feeding (independence levels ranged from supervision (FIM-5) to complete independence (FIM-7)). The integration of occupational therapy and dietary treatment is based on a patient-centered approach while considering the patient’s personal needs, preferences, and goals. This interdisciplinary partnership is essential for meeting the complex needs of prolonged mechanically ventilated patients and promotes independent functioning and quality of life.

Keywords: dietary, mechanical ventilation, occupational therapy, tube feeding weaning

Procedia PDF Downloads 78
1785 Use of Low-Cost Hydrated Hydrogen Sulphate-Based Protic Ionic Liquids for Extraction of Cellulose-Rich Materials from Common Wheat (Triticum Aestivum) Straw

Authors: Chris Miskelly, Eoin Cunningham, Beatrice Smyth, John D. Holbrey, Gosia Swadzba-Kwasny, Emily L. Byrne, Yoan Delavoux, Mantian Li

Abstract:

Recently, the use of ionic liquids (ILs) for the preparation of lignocellulose-derived cellulosic materials as alternatives to petrochemical feedstocks has been the focus of considerable research interest. While the technical viability of IL-based lignocellulose treatment methodologies has been well established, the high cost of reagents inhibits commercial feasibility. This work aimed to assess the technoeconomic viability of the preparation of cellulose-rich materials (CRMs) using protic ionic liquids (PILs) synthesized from low-cost alkylamines and sulphuric acid. For this purpose, the tertiary alkylamines triethylamine and dimethylbutylamine were selected. The bulk-scale production cost of the synthesized PILs, triethylammonium hydrogen sulphate and dimethylbutylammonium hydrogen sulphate, was reported as $0.78 kg-1 to $1.24 kg-1. CRMs were prepared through the treatment of common wheat (Triticum aestivum) straw with these PILs. By controlling treatment parameters, CRMs with a cellulose content of ≥ 80 wt% were prepared. This was achieved using a T. aestivum straw to PIL loading ratio of 1:15 w/w, a treatment duration of 180 minutes, and ethanol as a cellulose antisolvent. Infrared spectral data and the decreased onset degradation temperature of CRMs (ΔT_onset ~ 70 °C) suggested the formation of cellulose sulphate esters during treatment. Chemical derivatisation can aid the dispersion of prepared CRMs in non-polar polymer/composite matrices, but may act as a barrier to thermal processing at temperatures above 150 °C. It was also shown that treatment increased the crystallinity of CRMs (ΔCrI ~ 40%) without altering the native crystalline structure or crystallite size (~ 2.6 nm) of cellulose; peaks associated with the cellulose I crystalline planes (110), (200), and (004) were observed at Bragg angles of 16.0°, 22.5° and 35.0°, respectively. This highlighted the inability of the assessed PILs to dissolve crystalline cellulose and was attributed to the high acidity (pKa ~ -1.92 to -6.42) of the sulphuric acid derived anions. Electron micrographs revealed that the stratified multilayer tissue structure of untreated T. aestivum straw was significantly modified during treatment. T. aestivum straw particles were disassembled during treatment, with prepared CRMs adopting a golden-brown film-like appearance. This work demonstrated the degradation of the non-cellulosic fractions of lignocellulose without dissolution of cellulose. It is the first to report on the derivatisation of cellulose during treatment with protic hydrogen sulphate ionic liquids, and the potential implications of this with reference to biopolymer feedstock preparation.
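The crystallite size and crystallinity index quoted above are conventionally obtained from the Scherrer equation and the Segal method; the sketch below assumes those standard formulas (the abstract does not name them) and uses illustrative peak values rather than the measured diffractograms.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size (nm) from the Scherrer equation, using the FWHM of
    the cellulose (200) reflection and Cu K-alpha radiation by default."""
    beta = math.radians(fwhm_deg)             # FWHM in radians
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

def segal_cri(i_200, i_am):
    """Segal crystallinity index (%) from the (200) peak intensity and the
    amorphous intensity minimum near 18 degrees 2-theta."""
    return (i_200 - i_am) / i_200 * 100

# Illustrative values only (not the paper's diffractograms).
print(round(scherrer_size(fwhm_deg=3.2, two_theta_deg=22.5), 2), "nm")
print(round(segal_cri(i_200=1200, i_am=420), 1), "% CrI")
```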

Keywords: cellulose, extraction, protic ionic liquids, esterification, thermal stability, waste valorisation, biopolymer feedstock

Procedia PDF Downloads 36
1784 Application of Human Biomonitoring and Physiologically-Based Pharmacokinetic Modelling to Quantify Exposure to Selected Toxic Elements in Soil

Authors: Eric Dede, Marcus Tindall, John W. Cherrie, Steve Hankin, Christopher Collins

Abstract:

Current exposure models used in contaminated land risk assessment are highly conservative. Use of these models may lead to over-estimation of actual exposures, possibly resulting in negative financial implications due to unnecessary remediation. Thus, we are carrying out a study seeking to improve our understanding of human exposure to selected toxic elements in soil: arsenic (As), cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb) resulting from allotment land-use. The study employs biomonitoring and physiologically-based pharmacokinetic (PBPK) modelling to quantify human exposure to these elements. We recruited 37 allotment users (adults > 18 years old) in Scotland, UK, to participate in the study. Concentrations of the elements (and their bioaccessibility) were measured in allotment samples (soil and allotment produce). Records of the amount of produce consumed by the participants and participants’ biological samples (urine and blood) were collected for up to 12 consecutive months. Ethical approval was granted by the University of Reading Research Ethics Committee. PBPK models (coded in MATLAB) were used to estimate the distribution and accumulation of the elements in key body compartments, thus indicating the internal body burden. Simulating low element intake (based on estimated ‘doses’ from produce consumption records), the predictive models suggested that detection of these elements in urine and blood was possible within a given period of time following exposure. This information was used in planning the biomonitoring, and is currently being used in the interpretation of test results from biological samples. Evaluation of the models is being carried out using biomonitoring data, by comparing model-predicted concentrations and measured biomarker concentrations. The PBPK models will be used to generate bioavailability values, which could be incorporated into contaminated land exposure models. Thus, the findings from this study will promote a more sustainable approach to contaminated land management.
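The structure of such a PBPK simulation can be sketched with a minimal compartment model. The study's models were coded in MATLAB, so the Python sketch below is only an illustration of the general approach, with assumed rate constants and dosing rather than the study's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal two-compartment PBPK-style sketch for an ingested element:
# gut -> blood (absorption), blood -> urine (elimination).
# Rate constants and the dosing schedule are illustrative assumptions,
# not parameters from the study's MATLAB models.
K_ABS, K_ELIM = 0.5, 0.1            # 1/day
DOSE_UG, DOSE_INTERVAL = 5.0, 1.0   # 5 ug ingested once per day

def rhs(t, y):
    gut, blood, urine = y
    return [-K_ABS * gut,
            K_ABS * gut - K_ELIM * blood,
            K_ELIM * blood]

y = np.array([0.0, 0.0, 0.0])
blood_burden = []
for day in range(90):                          # 90 days of daily intake
    y[0] += DOSE_UG                            # daily ingestion bolus
    sol = solve_ivp(rhs, (0, DOSE_INTERVAL), y, max_step=0.1)
    y = sol.y[:, -1]
    blood_burden.append(y[1])

print(f"blood burden after 90 days: {blood_burden[-1]:.2f} ug")
```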

Keywords: biomonitoring, exposure, PBPK modelling, toxic elements

Procedia PDF Downloads 319
1783 Comparison between the Roller-Foam and Neuromuscular Facilitation Stretching on Flexibility of Hamstrings Muscles

Authors: Paolo Ragazzi, Olivier Peillon, Paul Fauris, Mathias Simon, Raul Navarro, Juan Carlos Martin, Oriol Casasayas, Laura Pacheco, Albert Perez-Bellmunt

Abstract:

Introduction: Stretching techniques are frequently and widely used in the sports world for their many effects. One of the main benefits is the gain in flexibility and range of motion and the facilitation of sporting performance. Recently, the use of the Roller-Foam (RF) has spread in sports practice at both elite and recreational levels, as its benefits are thought to be similar to those observed with stretching. The objective of the following study is to compare the results of the Roller-Foam with proprioceptive neuromuscular facilitation (PNF) stretching (one of the stretching techniques with the most evidence) on the hamstring muscles. Study design: The design of the study is a single-blind, randomized controlled trial, and the participants are 40 healthy volunteers. Intervention: The subjects were distributed randomly into one of the following groups; PNF stretching intervention group: 4 repetitions of PNF stretching (5 seconds of contraction, 5 seconds of relaxation, 20-second stretch); Roller-Foam intervention group: 2 minutes of Roller-Foam applied to the hamstring muscles. Main outcome measures: hamstring muscle flexibility was assessed at the beginning, during (30’’ of intervention) and at the end of the session using the Modified Sit and Reach test (MSR). Results: The baseline data in both groups are comparable to each other. The PNF group obtained an increase in flexibility of 3.1 cm at 30 seconds (first series) and of 5.1 cm at 2 minutes (the last of all series). The RF group obtained a 0.6 cm difference at 30 seconds and 2.4 cm after 2 minutes of application of the Roller-Foam. The results were statistically significant for intragroup comparisons but not for intergroup comparisons. Conclusions: Despite the fact that the use of the foam roller is spreading in the sports and rehabilitation fields, the results of the present study suggest that the gain in hamstring flexibility is greater if PNF-type stretches are used instead of RF. These results may be due to the fact that the foam roller acts more on the fascial tissue, while the stretches act more on the myotendinous unit. Future studies are needed, increasing the sample size and diversifying the types of stretching.

Keywords: hamstring muscle, stretching, neuromuscular facilitation stretching, roller foam

Procedia PDF Downloads 186
1782 Shale Gas Accumulation of Over-Mature Cambrian Niutitang Formation Shale in Structure-Complicated Area, Southeastern Margin of Upper Yangtze, China

Authors: Chao Yang, Jinchuan Zhang, Yongqiang Xiong

Abstract:

The Lower Cambrian Niutitang Formation shale (NFS), deposited in a marine deep-shelf environment in the Southeast Upper Yangtze (SUY), possesses an excellent source rock basis for shale gas generation; however, it is currently challenged by being over-mature with strong tectonic deformation, leading to much uncertainty about its gas-bearing potential. With emphasis on the shale gas enrichment of the NFS, analyses were made based on the regional gas-bearing differences obtained from field gas-desorption testing of 18 geological survey wells across the study area. Results show that the NFS bears a low gas content of 0.2-2.5 m³/t, and the eastern region of the SUY is higher than the western region in gas content. Moreover, the methane fraction presents a similar regional differentiation, with the western region at less than 10 vol.% while the eastern region is generally more than 70 vol.%. Through the analysis of geological theory, the following conclusions are drawn: The depositional environment determines the gas-enriching zones. In the western region, the Dengying Formation underlying the NFS in unconformity contact was mainly plateau facies dolomite with caves and thereby bears poor gas-sealing ability, whereas the Laobao Formation underlying the NFS in the eastern region was a set of siliceous rocks of shelf-slope facies, which can effectively prevent the shale gas from escaping from the NFS. The tectonic conditions control the gas-enriching bands in the SUY, which is located in the fold zones formed by the thrust of the Southern China plate towards the Sichuan Basin. Compared with the western region, located in the trough-like folds, the eastern region at the fold-thrust belts was uplifted early and deformed weakly, resulting in a relatively lower maturity level and relatively slight tectonic deformation of the NFS. Faults determine whether shale gas can be accumulated on a large scale. Four deep and large normal faults in the study area cut through the Niutitang Formation to the Sinian strata, directly causing a large spillover of natural gas in the adjacent areas. For the secondary faults developed within the shale formation, the reverse faults generally have a positive influence on shale gas accumulation, while the normal faults have the opposite influence. Overall, the shale gas enrichment targets of the NFS are the areas with a certain thickness of siliceous rocks at the base of the Niutitang Formation, near the margin of the paleo-uplift, and with less developed faults. These findings provide direction for shale gas exploration in South China, and also provide references for areas with similar geological conditions all over the world.

Keywords: over-mature marine shale, shale gas accumulation, structure-complicated area, Southeast Upper Yangtze

Procedia PDF Downloads 147
1781 A Qualitative Research of Online Fraud Decision-Making Process

Authors: Semire Yekta

Abstract:

Many online retailers set up manual review teams to overcome the limitations of automated online fraud detection systems. This study critically examines the strategies they adopt in their decision-making process to set apart fraudulent individuals from non-fraudulent online shoppers. The study uses a mixed-methods research approach. 32 in-depth interviews have been conducted, alongside participant observation and auto-ethnography. The study found that all steps of the decision-making process are significantly affected by a level of subjectivity, personal understandings of online fraud, preferences and judgments, and not necessarily by objectively identifiable facts. Rather than clearly knowing who the fraudulent individuals are, the team members have to predict whether they think the customer might be a fraudster. Common strategies used are relying on the classification and fraud scorings in the automated fraud detection systems, weighing up arguments for and against the customer and making a decision, using cancellation to test customers’ reactions, and making use of personal experiences and “the sixth sense”. The interaction in the team also plays a significant role, given that some decisions turn into a group discussion. While customer data represent the basis for the decision-making, fraud management teams frequently make use of Google search and Google Maps to find out additional information about the customer and verify whether the customer is the person they claim to be. While this, on the one hand, raises ethical concerns, on the other hand, Google Street View of the address and area of the customer puts customers living in less privileged housing and areas at a higher risk of being classified as fraudsters. Phone validation is used as a final measure to make decisions for or against the customer when previous strategies and Google search do not suffice. However, phone validation is also characterized by individuals’ subjectivity, personal views and judgment of the customer’s reaction on the phone, which results in a final classification as genuine or fraudulent.

Keywords: online fraud, data mining, manual review, social construction

Procedia PDF Downloads 343
1780 Comparison of Depth of Cure and Degree of Conversion between Opus Bulk Fill and X-Tra Fill Bulk Fill Composites

Authors: Yasaman Samani, Ali Golmohammadi

Abstract:

Introduction: The degree of conversion and depth of cure directly affect the clinical success of resin composite restorations. One of the main challenges in achieving a successful composite restoration is the achievement of a sufficient depth of cure. Insufficient polymerization may lead to a decrease in the physical/mechanical and biological properties of resin composites and, as a result, an unsuccessful composite restoration. Thus, because of the importance of studying and evaluating the depth of cure and degree of conversion in bulk-fill composites, we decided to evaluate and compare the degree of conversion and depth of cure in two bulk-fill composites: x-tra fil (Voco, Germany) and Opus Bulk fill APS (FGM, Brazil). Materials and Methods: Composite resin specimens (n=10 per group) were prepared as cylinder blocks (4×8 mm) with the bulk-fill composites, x-tra fil (Voco, Germany) designated as Group A and Opus Bulk fill APS (FGM, Brazil) designated as Group B. Depth of cure was determined according to the ISO 4049 depth of cure method, in which each specimen was cured (iLED, Woodpecker, China) for 40 seconds, and the FTIR spectroscopy method was used to estimate the degree of conversion of both bulk-fill composites. The degree of conversion of monomer to polymer was estimated individually in the coronal half (Groups A1 and B1) and the pulpal half (Groups A2 and B2) by dividing each specimen into two halves. The data were analyzed using a Student’s t-test and one-way ANOVA at a 5% level of significance. Results: The mean depth of cure for x-tra fil (Voco, Germany) was 3.99 (±0.16), and for Opus Bulk fill APS (FGM, Brazil) it was 2.14 (±0.3). The degree of conversion percentage in Group A1 was 82.7 (±6.1), in Group A2 was 73.4 (±5.2), in Group B1 was 63.3 (±4.7) and in Group B2 was 56.5 (±7.7). Statistical analysis revealed a significant difference in the depth of cure between the two bulk-fill composites, with x-tra fil (Voco, Germany) higher than Opus Bulk fill APS (FGM, Brazil) (P<0.001). The degree of conversion percentage also showed a significant difference, Group A1 being higher than A2 (P=0.0085), B1, and B2 (P<0.001). Group A2 was also higher than B1 (P=0.003) and B2 (P<0.001). There was no significant difference between B1 and B2 (P=0.072). Conclusion: The results indicate that x-tra fil has a greater depth of cure and a higher degree of conversion than Opus Bulk fill APS. The coronal half of x-tra fil had the highest degree of conversion (82.66%), and the pulpal half of Opus Bulk fill APS had the lowest (56.45%). Even though both bulk-fill composite materials had an acceptable degree of conversion (55% and higher), x-tra fil showed better results.
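The degree of conversion values above are conventionally derived from FTIR peak ratios. A minimal sketch of that calculation is given below, assuming the commonly used aliphatic (about 1638 cm-1) and aromatic (about 1608 cm-1) C=C bands and illustrative absorbance values rather than the study's spectra.

```python
def degree_of_conversion(aliphatic_cured, aromatic_cured,
                         aliphatic_uncured, aromatic_uncured):
    """DC% = (1 - R_cured / R_uncured) * 100, where R is the ratio of the
    aliphatic C=C absorbance to the aromatic C=C reference absorbance."""
    r_cured = aliphatic_cured / aromatic_cured
    r_uncured = aliphatic_uncured / aromatic_uncured
    return (1 - r_cured / r_uncured) * 100

# Illustrative absorbance values (not the study's spectra): aliphatic C=C
# near 1638 cm^-1, aromatic C=C reference near 1608 cm^-1.
print(round(degree_of_conversion(0.052, 0.120, 0.210, 0.168), 1), "% conversion")
```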

Keywords: depth of cure, degree of conversion, bulk-fill composite, FTIR

Procedia PDF Downloads 102
1779 The Effectiveness of Guest Lecturers with Disabilities in the Classroom

Authors: Afshin Gharib

Abstract:

Often, instructors prefer to bring into class a guest lecturer who can provide an “experiential” perspective on a particular topic. The assumption is that the personal experience brought into the classroom makes the material resonate more with students and that students would have a preference for material being taught from an experiential perspective. The question we asked in the present study was whether a guest lecture from an “experiential” expert with a disability (e.g., a guest suffering from cone-rod dystrophy lecturing on vision, or a dyslexic lecturing on the psychology of reading) would be more effective than the course instructor in capturing students’ attention and conveying information in an Introduction to Psychology class. Students in two sections of Introduction to Psychology (N = 25 in each section) listened to guest lecturers with disabilities lecturing on a topic related to their disability, one in the area of Sensation and Perception (the guest lecturer is vision impaired) and one in the area of Language Development (the guest lecturer is dyslexic). The guest lecturers lectured on the same topic in both sections; however, each lecturer used their own experiences to highlight the topics they covered in one section but not the other (counterbalanced between sections), providing students in one section with experiential testimony. Following each of the 4 lectures (two experiential, two non-experiential), students rated the lecture on several dimensions, including overall quality, level of engagement, and performance. In addition, students in both sections were tested on the same test items from the lecture material to ascertain the degree of learning, and given identical “pop” quizzes two weeks after the exam to measure retention. It was hypothesized that students would find the experiential lectures from lecturers talking about their disabilities more engaging, learn more from them, and retain the material for longer. We found that students in fact preferred the course instructor to the guests, regardless of whether the guests included a discussion of their own disability in their lectures. Performance on the exam questions and the pop quiz items was not different between “experiential” and “non-experiential” lectures, suggesting that guest lecturers who discuss their own disabilities in lecture are not more effective in conveying material and students are not more likely to retain material delivered by “experiential” guests. In future research, we hope to explore the reasons for students’ preference for their regular instructor over guest lecturers.

Keywords: guest lecturer, student perception, retention, experiential

Procedia PDF Downloads 17
1778 Interstellar Mission to Wolf 359: Possibilities for the Future

Authors: Rajasekar Anand Thiyagarajan

Abstract:

One of the driving forces of mankind is “le rêve d'étoiles”, or the “dream of stars”, which has been the dynamo of our civilization. Since the dawn of civilization, mankind has looked upon the heavens with wonder and tried to understand the meaning of those twinkling lights. As human history has progressed, the understanding of those twinkling lights has progressed as well, and we now know a great deal about stars. However, the dream of reaching those stars still remains an aspiration of mankind. In fact, the needs of civilization constantly drive the search for better knowledge, and the capability of reaching those stars is one way that knowledge and exultation can be achieved. This paper takes a futuristic case study of an interstellar mission to Wolf 359, which is approximately 8.3 light years away from us. In terms of galactic distances, 8.3 light years is not much, but as far as present space technology capabilities are concerned, it is next to impossible for us to reach such distances. Several studies have been conducted on missions to Alpha Centauri and other nearby stars such as Barnard's star and Wolf 359. However, taking a more distant star such as Wolf 359 will help test mankind's drive for interstellar exploration, as exotic means of travel are needed. This paper will take a futuristic case study of the mission, and various possibilities of space travel will be discussed in detail. Comprehensive tables and graphs will be given which depict the amount of time that will pass for each mode of travel and, more importantly, some idea of the cost in terms of energy as well as money, discussed within today's context. In addition, the prerequisites for an interstellar mission to Wolf 359 will be given in detail, as well as a sample mission to that particular destination. Even though the possibility of such a mission is probably nonexistent for the 21st century, it is essential to do these exercises so that mankind's understanding of the universe will be increased.
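The travel-time and "time expansion" (time dilation) comparison described above reduces to simple special relativity arithmetic. A minimal sketch follows, taking the 8.3 light-year figure from the abstract and treating the cruise speeds as illustrative assumptions.

```python
import math

DISTANCE_LY = 8.3   # distance to Wolf 359 as quoted in the abstract

def travel_times(beta):
    """Earth-frame and ship-frame (time-dilated) travel times in years for a
    constant cruise speed beta = v/c, ignoring acceleration phases."""
    earth_years = DISTANCE_LY / beta
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return earth_years, earth_years / gamma

for beta in (0.01, 0.1, 0.5, 0.9, 0.99):   # illustrative cruise speeds
    earth, ship = travel_times(beta)
    print(f"v = {beta:4.2f} c : {earth:8.1f} yr (Earth), {ship:8.1f} yr (ship)")
```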

Keywords: wolf 359, interstellar mission, alpha centauri, core diameter, core length, reflector thickness enrichment, gas temperature, reflector temperature, power density, mass of the space craft, acceleration of the space craft, time expansion

Procedia PDF Downloads 428
1777 Tourism Policy Challenges in Post-Soviet Georgia

Authors: Merab Khokhobaia

Abstract:

Research on Georgian tourism policy challenges is important, as tourism can play an increasing role in the country's economic growth and in the improvement of its standard of living, even with scanty resources, through improved creative approaches. It is also important to make correct decisions at the macroeconomic level, which will accordingly be reflected in the successful functioning of travel companies and, finally, in the improvement of the country's economic indicators. In order to correctly orient sectoral policy, it is important to precisely determine its role in the economy. Development of the travel industry has been considered one of the priorities in Georgia; the country has a unique cultural heritage and traditions, as well as plenty of natural resources, which are a significant precondition for the development of tourism. Despite the factors mentioned above, the existing resources are not fully utilized. This work represents a study of the subjective as well as objective reasons for the ineffective functioning of the sector. During the years of transformation experienced by Georgia, the role of the travel industry in the country's economic development was the subject of continual discussion. Such assessments were often biased, and they did not rest on specific calculations. This topic became especially prominent in the context of the market economy, because reliable statistical data have particular significance in the design of tourism policy. In order to study the aforementioned issue in depth, this paper analyzes monetary as well as non-monetary indicators. The research broadly covered the tourism indicators system; we analyzed the flaws in the reporting of the results of the tourism sector in Georgia. Existing defects are identified, and recommendations for their improvement are offered. For stable development, tourism, like other economic sectors, needs a well-designed policy from the perspective of national as well as local and regional development. Tourism policy must be drawn up in order to efficiently achieve the goals established in short-term and long-term dynamics at the national or regional scale of a specific country. The article focuses on the role and responsibility of state institutions in the planning and implementation of tourism policy. The government has various tools and levers which may positively influence these processes. These levers are especially important in terms of international as well as domestic tourism development. Within the framework of this research, the regulatory documents in force in relation to this industry were also analyzed. The main attention is turned to their modernization and the necessity of their compliance with European standards. It is a current issue to direct the efforts of state policy towards supporting business by implementing infrastructure projects, as well as by developing human resources, which may be possible by supporting the relevant higher and vocational education programs.

Keywords: regional development, tourism industry, tourism policy, transition

Procedia PDF Downloads 263
1776 Optimization of Alkali Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production

Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy

Abstract:

The limited supply and the related negative environmental consequences of fossil fuels are driving researchers to find sustainable sources of energy. Lignocellulosic biomass like sorghum straw is considered among the cheap, renewable and abundantly available sources of energy. However, the conversion of lignocellulosic biomass to bioenergy like bioethanol is hindered by the recalcitrant nature of lignin in the biomass. Therefore, removal of lignin is a vital step for lignocellulose conversion to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using Design Expert software to remove lignin and to release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. Sodium hydroxide concentrations between 0.5-1.5% v/v, pretreatment times from 5-25 minutes and pretreatment temperatures from 120-200 °C were considered to depolymerize sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment following the renewable energy laboratory procedure. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes with an alkali concentration of 1.5% and a temperature of 140 °C released the maximum cellulose and lignin. At this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw at the optimal pretreatment condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars by the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed a significant amount of lignin removal and cellulose release at the optimal condition. This enhances the yield of reducing sugars as well as the ethanol yield. The study demonstrates the potential of microwave pretreatment for enhancing bioethanol yield from sorghum straw.

Keywords: cellulose, hydrolysis, lignocellulose, optimization

Procedia PDF Downloads 271
1775 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst Infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understand injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computer tomography scans, informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, by replicating experimental impact tests with a series of computational simulations, in terms of head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.

Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects

Procedia PDF Downloads 326
1774 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

The electric field is an important fundamental concept in electrostatics. In high school, Thai students have generally already learned about the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. That is the prerequisite basic knowledge students hold before entering university. At the first-year university level, students quickly revise this basic knowledge and are then introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often seriously struggled with the basic physics concepts (the superposition of electric fields and the inverse square law) and with the mathematics relevant to this topic. This then also affected students’ understanding of advanced topics within the course, such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question about sketching the net electric field vectors from a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To gain a deeper understanding of each level, 30 students were interviewed about their individual responses. The pre-test result was that about 90% of students had an incorrect understanding. Even after completing the lectures, only 26.5% of them could provide correct responses. Up to 50% had confusion and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic. In addition, the students’ alternative conceptions identified here could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.
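The target concept itself (superposition of inverse-square contributions from a uniformly charged rod) can be demonstrated numerically. The sketch below is an illustrative discretisation, not part of the study's instrument.

```python
import numpy as np

K = 8.9875517923e9      # Coulomb constant, N.m^2/C^2

def field_from_rod(total_charge, length, field_point, n_segments=2000):
    """Net E-field (vector) at field_point from a uniformly charged rod
    lying on the x-axis from -L/2 to +L/2, by superposing point-charge
    contributions from small segments (inverse square law for each)."""
    dq = total_charge / n_segments
    xs = np.linspace(-length / 2, length / 2, n_segments)
    e_total = np.zeros(3)
    for x in xs:
        r = np.array(field_point) - np.array([x, 0.0, 0.0])
        e_total += K * dq * r / np.linalg.norm(r) ** 3
    return e_total

# 1 m rod carrying +10 nC; field point 0.5 m above the rod's midpoint.
E = field_from_rod(10e-9, 1.0, (0.0, 0.5, 0.0))
print(E)   # by symmetry the x-component ~ 0; the field points away from the rod
```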

Keywords: alternative conceptions, electric field of continuous charged distributions, inverse square law, levels of student understandings, superposition principle

Procedia PDF Downloads 295
1773 Phi Thickening Induction as a Response to Abiotic Stress in the Orchid Miltoniopsis

Authors: Nurul Aliaa Idris, David A. Collings

Abstract:

Phi thickenings are specialized secondary cell wall thickenings that are found in the cortex of the roots of a wide range of plant species, including orchids. The role of phi thickenings in the root is still under debate, though research has linked environmental conditions, particularly abiotic stresses such as water stress, heavy metal stress and salinity, to their induction in roots. It has also been suggested that phi thickenings may act as a barrier to regulate solute uptake, act as a physical barrier against fungal hyphal penetration due to their resemblance to the Casparian strip, and play a mechanical role in supporting cortical cells. We have investigated phi thickening function in epiphytic orchids of the genus Miltoniopsis through induction experiments against factors such as soil compaction and water stress. The permeability of the phi thickenings in Miltoniopsis was tested through uptake experiments using the fluorescent tracer dyes Calcofluor white, Lucifer yellow and Propidium iodide, then viewed with wide-field or confocal microscopy. To test whether phi thickenings may prevent fungal colonization of root cells, a fungal re-infection experiment was conducted by inoculating the isolated symbiotic fungus onto sterile in vitro Miltoniopsis explants. As the movement of fluorescent tracers through the apoplast was not blocked by phi thickenings, and as phi thickenings developed in the roots of sterile cultures in the absence of fungus and did not prevent fungal colonization of cortical cells, the phi thickenings in Miltoniopsis do not function as a barrier. Phi thickenings were found to be absent in roots grown on agar and remained absent when plants were transplanted to moist soil. However, phi thickenings were induced when plants were transplanted to well-drained media, and by the application of water stress in all soils tested. It is likely that phi thickenings stabilize the root cortex during dehydration. Nevertheless, the varied induction responses present in different plant species suggest that phi thickenings may play several adaptive roles, instead of just one, depending on the species.

Keywords: abiotic stress, Miltoniopsis, orchid, phi thickening

Procedia PDF Downloads 147
1772 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Tarantino E.

Abstract:

Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed on a wide range of cultivated plant species. The semiarid regions of the world are considered the main center of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic weed control treatments (physical, chemical, biological and agronomic methods, including the use of resistant plants) were carried out. In 2014, a trial was performed on processing tomato (cv Docet), grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 in a clay soil (USDA) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5 and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 70, 75, 81 and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded. Leaf chlorophyll meter (SPAD) values of the tomato plants were also measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. The results show lower SPAD values in parasitized tomato plants compared to healthy ones. In addition, none of the treatments studied provided complete control of Phelipanche ramosa. However, the virulence of the attacks was mitigated by some treatments: the radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone and the resistant tomato genotype. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's “seed bank” in the soil.
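The ANOVA and Tukey's test workflow described above (run in JMP) can be reproduced in Python; the SPAD readings below are hypothetical values used only to show the procedure, not the trial data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical SPAD readings for three of the treatments (not the trial data),
# just to illustrate the one-way ANOVA + Tukey's HSD workflow.
spad = {
    "untreated": [32.1, 30.8, 31.5],
    "radicon":   [36.4, 35.9, 37.2],
    "sulfur":    [35.0, 34.2, 36.1],
}

print(f_oneway(*spad.values()))                 # one-way ANOVA F statistic and p-value

values = np.concatenate(list(spad.values()))
groups = np.repeat(list(spad.keys()), [len(v) for v in spad.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # Tukey's HSD pairwise comparisons
```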

Keywords: control methods, Phelipanche ramosa, tomato crop

Procedia PDF Downloads 614
1771 Study of Chemical State Analysis of Rubidium Compounds in Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-Ray Emission Lines with Wavelength Dispersive X-Ray Fluorescence Spectrometer

Authors: Harpreet Singh Kainth

Abstract:

Rubidium salts have been commonly used as an electrolyte to improve the cycling efficiency of Li-ion batteries. In recent years, they have been implemented on a large scale for further technological advances to improve the performance rate and cyclability of batteries. X-ray absorption spectroscopy (XAS) is a powerful tool for obtaining information on the electronic structure involved in the chemical state analysis of the active materials used in batteries. However, this technique is not well suited for industrial applications because it needs a synchrotron X-ray source and a special sample setup for in-situ measurements. In contrast, a conventional wavelength dispersive X-ray fluorescence (WDXRF) spectrometer is a nondestructive technique used to study the chemical shift in all transitions (K, L, M, …) and does not require any special pre-preparation. In the present work, the fluorescent Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-ray spectra of rubidium in different chemical forms (Rb₂CO₃, RbCl, RbBr, and RbI) have been measured for the first time with a high-resolution wavelength dispersive X-ray fluorescence (WDXRF) spectrometer (Model: S8 TIGER, Bruker, Germany), equipped with an Rh anode X-ray tube (4 kW, 60 kV and 170 mA). In ₃₇Rb compounds, the measured energy shifts are in the range of (-0.45 to -1.71) eV for the Lα X-ray peak, (0.02 to 0.21) eV for Lβ₁, (0.04 to 0.21) eV for Lβ₃, (0.15 to 0.43) eV for Lβ₄ and (0.22 to 0.75) eV for the Lγ₂,₃ X-ray emission lines. The chemical shifts in the rubidium compounds have been measured by taking Rb₂CO₃ as the standard reference. A Voigt function is used to determine the central peak position for all compounds. Both positive and negative shifts have been observed in the L shell emission lines. In the Lα X-ray emission line, all compounds show a negative shift, while in the Lβ₁, Lβ₃,₄, and Lγ₂,₃ X-ray emission lines, all compounds show a positive shift. These positive and negative shifts correspond to an increase or decrease in X-ray energy. It appears that the ligands attached to the central metal atom attract or repel the electrons towards or away from the parent nucleus. This pulling and pushing character affects the central peak position of the compounds, which causes the chemical shift. To understand the chemical effect in more detail, factors like electronegativity, line intensity ratio, effective charge and bond length are considered responsible for the chemical state analysis of the rubidium compounds. The effective charge has been calculated from the Suchet and Pauling methods, while the line intensity ratio has been obtained by calculating the area under the relevant emission peak. In the present work, it has been observed that electronegativity, effective charge and intensity ratio (Lβ₁/Lα, Lβ₃,₄/Lα and Lγ₂,₃/Lα) are inversely proportional to the chemical shift (RbCl > RbBr > RbI), while bond length has been found to be directly proportional to the chemical shift (RbI > RbBr > RbCl).
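Determining the central peak position with a Voigt function, as described above, amounts to a non-linear least-squares fit. The sketch below uses a synthetic peak with illustrative energies, not the measured WDXRF spectra.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, center, amplitude, sigma, gamma, offset):
    """Voigt line shape used to locate the central peak position."""
    return amplitude * voigt_profile(x - center, sigma, gamma) + offset

# Synthetic Rb L-alpha-like peak near 1.694 keV (illustrative energies and
# widths only, not a recorded spectrum).
rng = np.random.default_rng(1)
energy = np.linspace(1.66, 1.73, 200)                     # keV
counts = voigt(energy, 1.6941, 5.0, 0.004, 0.003, 2.0) + rng.normal(0, 0.5, 200)

popt, _ = curve_fit(voigt, energy, counts,
                    p0=[1.694, 5.0, 0.004, 0.003, 2.0])
# The chemical shift would then be centre(sample) - centre(reference).
print(f"fitted peak centre: {popt[0] * 1000:.2f} eV")
```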

Keywords: chemical shift in L emission lines, bond length, electronegativity, effective charge, intensity ratio, rubidium compounds, WDXRF spectrometer

Procedia PDF Downloads 507
1770 Structural Analysis and Evolution of 18th Century Ottoman Imperial Mosques (1750-1799) in Comparison with the Classical Period Examples

Authors: U. Demir

Abstract:

The 18th century, the period of 'change' in the Ottoman Empire, affected architecture as well: the Classical period was left behind, and architecture developed a different formal language. This change is especially noticeable in monumental buildings and thus manifested itself in the mosques. But is it possible to speak of a structural counterpart to the 'change' that occurred in decoration? The aim of this study is to investigate the changes in the 18th century mosques, and their relation to the classical tradition, through plan schemes and structural systems. The study focuses on the monumental mosques constructed during the reigns of the three sultans who ruled in the second half of the century (Mustafa III, 1757-1774; Abdülhamid I, 1774-1789; and Selim III). In order of construction, these are the Ayazma, Laleli, Zeyneb Sultan, Fatih, Beylerbeyi, Şebsefa Kadın, Eyüb Sultan, Mihrişah Valide Sultan and Üsküdar-Selimiye mosques. In terms of plan scheme, four mosques have a square or nearly square plan, while the others have a rectangular plan developed longitudinally along the mihrab axis. This situation is widespread throughout the period. In addition to the longitudinally developed plan, which is the general characteristic of 18th century mosques, classical plan schemes continued to be used in the same direction. The mihrab area was treated as a distinct space in five mosques, while in the others it was formed as a niche on the wall surface; this, too, was widespread in the second half of the century. In the classical period, the lodges could be located at the back of the mosque interior, without interfering with the main worship area. In this period, the lodges were withdrawn from the main worship area and separated from the main interior with their own structural and covering systems. The plans seem to be formed as a result of the addition of lodge sections to the northern part of the Classical period mosque scheme. The 18th century mosques are constructions in which the change of architectural language and style can be observed easily. This change and the break from the classical period manifest themselves quickly in the structural elements, wall surface decorations, pencil work designs, small scale decor elements and motifs. The speed and intensity of change in the decoration, however, was not matched in the structural context: the mosque construction rules of the traditional and classical era still continued in this century. While some mosque structures have plans inherited from their classical predecessors, others were constructed according to the same classical period rules. Nonetheless, the location and transformation of the lodges, which affect the interior design, are noteworthy. They mark a significant transition towards the new language of mosque design that would emerge in the next century. Within the scope of this conference, this study intends to draw attention to the structural evolution of 18th century Ottoman architecture through its royal mosques.

Keywords: mosque structure, Ottoman architecture, structural evolution, 18th century architecture

Procedia PDF Downloads 200
1769 Vicarious Cues in Portraying Emotion: Musicians' Self-Appraisal

Authors: W. Linthicum-Blackhorse, P. Martens

Abstract:

The present study seeks to discover attitudinal commonalities and differences within a musician population relative to the communication of emotion via music. We hypothesized that instrument type, as well as age and gender, would bear significantly on musicians' opinions. A survey was administered to 178 participants; 152 were current music majors (mean age 20.3 years, 62 female) and 26 were adult participants in a community choir (mean age 54.0 years, 12 female). The adult participants were all vocalists, while student participants represented the full range of orchestral instruments. The students were grouped by degree program (performance, music education, or other) and instrument type (voice, brass, woodwinds, strings, percussion). The survey asked 'How important are each of the following areas to you for portraying emotion in music?' Participants were asked to rate each of 15 items on a scale of 1 (not at all important) to 10 (very important). Participants were also instructed to leave blank any item that they did not understand. The 15 items were: dynamic contrast, overall volume, phrasing, facial expression, staging (placement), pitch accuracy, tempo changes, bodily movement, your mood, your attitude, vibrato, rubato, stage/room lighting, clothing type, and clothing color. Contrary to our hypothesis, there was no overall effect of gender or age, nor did any single response item show a significant difference due to these subject parameters. Among the student participants, however, one-way ANOVA revealed a significant effect of degree program on the rated importance of four items: dynamic contrast, tempo changes, vibrato, and rubato. Significant effects of instrument type were found in the responses to eight items: facial expression, staging, body movement, vibrato, rubato, lighting, clothing type, and clothing color. Post hoc comparisons (Tukey) show that some variation follows from obvious differences between instrument types (e.g. string players are more concerned with vibrato than everyone but woodwind players; vocalists are significantly more concerned with facial expression than everyone but string players), but other differences could point to communal mindsets toward vicarious cues within instrument type. These mindsets could be global (e.g. brass players deeming body movement significantly less important than string players, being less often featured as soloists and appearing less often at the front of the stage) or local (e.g. string players being significantly more concerned than all other groups about both clothing color and type, perhaps due to the strongly expressed opinions of specific teachers). Future work will attempt to identify the source of these self-appraisals, whether enculturated via explicit pedagogy or absorbed from individuals' observations and performance experience.
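
For illustration only, the post hoc step described above can be sketched with statsmodels' Tukey HSD on long-format ratings grouped by instrument type; the ratings below are invented placeholders, not the survey responses.

```python
# Minimal sketch of the post hoc analysis: Tukey pairwise comparisons of one survey
# item (here "vibrato", rated 1-10) across instrument-type groups. Data are invented.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

responses = pd.DataFrame({
    "instrument": ["voice"] * 4 + ["strings"] * 4 + ["brass"] * 4 + ["woodwinds"] * 4,
    "vibrato":    [7, 8, 6, 7,    9, 10, 9, 8,      4, 5, 3, 4,     8, 7, 9, 8],
})

# Items left blank ("did not understand") would be dropped before the comparison
responses = responses.dropna(subset=["vibrato"])

tukey = pairwise_tukeyhsd(endog=responses["vibrato"], groups=responses["instrument"], alpha=0.05)
print(tukey.summary())  # pairwise mean differences, confidence intervals and rejections
```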

Keywords: performance, vicarious cues, communication, emotion

Procedia PDF Downloads 110
1768 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models

Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel

Abstract:

In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction using image alignment methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models enables the detection of interactions between cells and the identification of invasion hierarchy and guidance patterns. Global measurements such as volume, length, and density of lymphatic vessels are obtained in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structure complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions enables the identification of lymphangiogenic agents and a better understanding of their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
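
As a rough illustration of the global measurements mentioned above, the sketch below segments a small synthetic 3D stack, skeletonises it with scikit-image, and derives simple volume, density and length estimates; the voxel sizes and data are assumed, and the study's branching and tortuosity analysis is not reproduced.

```python
# Minimal sketch of global vessel measurements (segmentation, skeletonisation,
# volume/length/density) on a small synthetic 3D stack. Real confocal/light-sheet
# volumes and the study's branching/tortuosity analysis are not reproduced here.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize  # 3D skeletonisation needs a recent scikit-image

# Synthetic volume: one bright tubular structure on a dark background
rng = np.random.default_rng(1)
volume = rng.normal(100, 10, size=(40, 64, 64))
volume[18:22, 30:34, 5:60] += 150          # a single "vessel" running along x

# 1. Segmentation: global Otsu threshold
mask = volume > threshold_otsu(volume)

# 2. Vessel volume and density in the stack
voxel_size = (2.0, 1.0, 1.0)                      # hypothetical z, y, x pitch in micrometres
vessel_volume = mask.sum() * np.prod(voxel_size)  # cubic micrometres
vessel_density = mask.sum() / mask.size           # fraction of the stack occupied

# 3. Skeletonisation and a rough length estimate (skeleton voxels x in-plane pitch)
skeleton = skeletonize(mask)
approx_length = skeleton.sum() * voxel_size[2]

print(f"vessel volume ~ {vessel_volume:.0f} um^3, density = {vessel_density:.3%}, "
      f"approx. length ~ {approx_length:.0f} um")
```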

Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids

Procedia PDF Downloads 378
1767 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds

Authors: Zeina Merabi, Arij Dao

Abstract:

The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX neurons, as well as to replicate the in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents for each class of neurons represented in our networks are based on pharmacological studies, rendering the networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially help to explain some aspects of combination sensitivity, including 1) the interplay between inhibitory interneuron activity and the post-inhibitory firing of HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation at the TCS site of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.
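
To illustrate one of the listed mechanisms, the toy model below shows post-inhibitory rebound produced by a hyperpolarization-activated (H-like) current in a single-compartment cell integrated with Euler's method; all parameters are illustrative assumptions, and this is not the study's HVC network model, which also includes T-type Ca2+ and synaptic currents.

```python
# Heavily simplified single-compartment sketch of post-inhibitory rebound driven by a
# hyperpolarization-activated (H-like) current. Toy parameters, not the study's model.
import numpy as np

C, g_L, E_L = 1.0, 0.1, -65.0          # capacitance, leak conductance and reversal (illustrative)
g_H, E_H, tau_r = 0.15, -30.0, 100.0   # H-like conductance, reversal (mV), activation time constant (ms)

def r_inf(v):
    """Steady-state activation of the H-like current (activates when hyperpolarized)."""
    return 1.0 / (1.0 + np.exp((v + 75.0) / 5.5))

dt, t_end = 0.1, 1500.0                # ms
t = np.arange(0.0, t_end, dt)
I_ext = np.where((t > 300.0) & (t < 800.0), -2.0, 0.0)  # hyperpolarizing current pulse

v = np.empty_like(t); r = np.empty_like(t)
v[0], r[0] = -60.0, r_inf(-60.0)
for i in range(1, t.size):
    I_H = g_H * r[i - 1] * (v[i - 1] - E_H)
    dv = (-g_L * (v[i - 1] - E_L) - I_H + I_ext[i - 1]) / C
    dr = (r_inf(v[i - 1]) - r[i - 1]) / tau_r
    v[i] = v[i - 1] + dt * dv
    r[i] = r[i - 1] + dt * dr

# After the pulse ends, the slowly deactivating H-like current transiently
# depolarizes the cell above rest: a post-inhibitory rebound.
print(f"rest ~ {v[int(250/dt)]:.1f} mV, during pulse ~ {v[int(790/dt)]:.1f} mV, "
      f"rebound peak ~ {v[int(800/dt):].max():.1f} mV")
```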

Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration

Procedia PDF Downloads 65
1766 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Notably, the widespread application of numerical (model-based) approaches is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty), and others to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as the parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
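
A highly simplified 1-D sketch of the surrogate-plus-discrepancy idea is given below: a Gaussian process emulates a stand-in numerical model, a calibration parameter is point-estimated, and a second Gaussian process is fitted to the residual model/experiment discrepancy. The simulator, data and parameter values are invented, and the paper's four-stage modular Bayesian formulation with multiple responses is not reproduced.

```python
# Toy sketch of surrogate + discrepancy calibration (not the paper's method).
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator(x, theta):
    """Stand-in numerical model: response at location x for a stiffness-like parameter theta."""
    return np.sin(theta * x)

rng = np.random.default_rng(0)

# "Field" observations: true parameter plus a systematic discrepancy and noise
x_obs = np.linspace(0.1, 3.0, 15)
theta_true = 1.3
y_obs = simulator(x_obs, theta_true) + 0.1 * x_obs + rng.normal(0, 0.02, x_obs.size)

# Stage 1: GP surrogate of the simulator over (x, theta) design points
design = np.array([(x, th) for x in np.linspace(0.1, 3.0, 10) for th in np.linspace(0.8, 1.8, 10)])
runs = simulator(design[:, 0], design[:, 1])
surrogate = GaussianProcessRegressor(kernel=RBF([1.0, 1.0]) + WhiteKernel(1e-6), normalize_y=True)
surrogate.fit(design, runs)

# Stage 2: point estimate of theta by minimising the surrogate misfit to the observations
def misfit(theta):
    pred = surrogate.predict(np.column_stack([x_obs, np.full_like(x_obs, theta)]))
    return np.sum((y_obs - pred) ** 2)

theta_hat = minimize_scalar(misfit, bounds=(0.8, 1.8), method="bounded").x

# Stage 3: GP on the residuals = systematic model/experiment discrepancy
residuals = y_obs - surrogate.predict(np.column_stack([x_obs, np.full_like(x_obs, theta_hat)]))
discrepancy = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4))
discrepancy.fit(x_obs.reshape(-1, 1), residuals)

print(f"estimated theta ~ {theta_hat:.2f} (true value {theta_true})")
```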

Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 326
1765 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been identified in the literature mainly from an operational standpoint, in addition to monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the widely used UK Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT abilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is necessary. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research effort aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline; it therefore plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilate the potential of DTs within the built environment. First, the research focuses on a review of relevant literature, while acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that ultimately highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry towards its digitalisation era, in which AEC stakeholders have a fundamental role to play from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 122
1764 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from the coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source-control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding 'mobile' vs. 'immobile' contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Remediation decision-making informed by flux and forensic source evaluations uses this information to develop remediation end-point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada's most polluted sites, the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring only 17-97 kg/year of PAHs were discharged from the STPs, which was corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, a period during which government studies demonstrated an on-going reduction in PAH concentrations in harbour sediments. The flux results were further corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not migration of PAH-laden sediments from the STPs during a large-scale remediation project.
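
Since the comparison above rests on annual PAH mass fluxes, a minimal sketch of how such a figure is obtained (concentration times discharge, integrated over a year) is given below; the concentration and flow values are hypothetical, not the Sydney monitoring data.

```python
# Minimal sketch of an annual contaminant mass flux estimate: flux = concentration x flow,
# integrated over the year. The concentration and discharge below are illustrative
# placeholders, not the Sydney Tar Ponds monitoring data.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_mass_flux_kg(concentration_ug_per_L, flow_m3_per_s):
    """PAH mass flux in kg/year from concentration (ug/L) and discharge (m3/s)."""
    ug_per_m3 = concentration_ug_per_L * 1000.0   # 1 m3 = 1000 L
    kg_per_s = ug_per_m3 * flow_m3_per_s * 1e-9   # ug -> kg
    return kg_per_s * SECONDS_PER_YEAR

# Example: 5 ug/L total PAH at a mean discharge of 0.2 m3/s (hypothetical values)
print(f"{annual_mass_flux_kg(5.0, 0.2):.0f} kg/year")   # roughly 32 kg/year
```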

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 239
1763 Reactors with Effective Mixing as a Solution for Micro-Biogas Plants

Authors: M. Zielinski, M. Debowski, P. Rusanowska, A. Glowacka-Gil, M. Zielinska, A. Cydzik-Kwiatkowska, J. Kazimierowicz

Abstract:

Technologies for micro-biogas plants with heating and mixing systems are presented as part of the project Research Coordination for a Low-Cost Biomethane Production at Small and Medium Scale Applications (Record Biomap). The main objective of the Record Biomap project is to build a network of operators and scientific institutions interested in cooperation and in the development of promising technologies in the sector of small and medium-sized biogas plants. The activities carried out in the project will bridge the gap between research and market and reduce the time to implementation of new, efficient technological and technical solutions. The first reactor, with a simultaneous mixing and heating system, is a concrete tank with a rectangular cross-section. In this reactor, heating is integrated with the mixing of substrate and anaerobic sludge. It is a solution dedicated to substrates with a high solids content, which cannot be fed into the reactor with pumps, even positive displacement pumps. Substrates are poured into the reactor and then mixed with the anaerobic sludge by a screw pump. The pumped sludge, flowing through the screw pump, is simultaneously heated by a heat exchanger. The level of the fermentation sludge inside the reactor chamber is above the bottom edge of the cover. The cover of the reactor carries the screw pump drive; inside the reactor, an electric motor drives the screw pump. The heated sludge circulates in the digester. The post-fermentation sludge is collected using a drain well, whose inlet is below the level of the sludge in the digester. The biogas is discharged from the reactor by the biogas intake valve located on the cover. This technology is very useful for the fermentation of lignocellulosic biomass and substrates with a high dry matter content (organic wastes). The second technology is a reactor for a micro-biogas plant with a pressure mixing system. The reactor has the form of a plastic or concrete tank with a circular cross-section. Effective mixing of the sludge is ensured by the tank bottom, which is profiled at 90°. Substrates for fermentation are supplied through an inlet well, which is equipped with a cover that eliminates odour release. The introduction of a new portion of substrates is preceded by pumping of digestate to the disposal well; optionally, digestate can flow by gravity to a digestate storage tank. The biogas obtained is discharged into the separator, and a valve supplies the biogas to the blower. The blower pressurises the biogas from the fermentation chamber in such a way as to facilitate the introduction of a new portion of substrates. Biogas is discharged from the reactor by a valve that enables biogas removal but prevents suction from outside the reactor.

Keywords: biogas, digestion, heating system, mixing system

Procedia PDF Downloads 154