The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth bounds on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods. The random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a probability assignment is defined for each range. To determine the most influential input variables and thereby reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The probability share of each finite element calculation is determined from the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is taken as the main system response. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations.
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared with the in situ measurements, and good agreement was observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
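The Belief/Plausibility construction described above can be sketched in a few lines. The intervals, probability assignments, admissible displacement, and the closed-form stand-in for the finite element response below are all hypothetical, chosen only to illustrate the vertex-combination bookkeeping:

```python
from itertools import product

# Two focal intervals per input variable, each with a basic probability
# assignment (values are illustrative, not from the study)
phi = [((28.0, 34.0), 0.5), ((30.0, 38.0), 0.5)]  # friction angle (deg)
E = [((20.0, 40.0), 0.4), ((30.0, 60.0), 0.6)]    # soil stiffness (MPa)

def response(phi_val, e_val):
    # stand-in for one deterministic finite element run: horizontal
    # displacement (mm) of the top point of the excavation
    return 500.0 / (phi_val * e_val ** 0.5)

threshold = 3.3  # admissible displacement (mm), assumed
belief = plausibility = 0.0
for (phi_iv, m1), (e_iv, m2) in product(phi, E):
    m = m1 * m2  # joint probability assignment of this combination
    # evaluate the model at all upper/lower-bound (vertex) combinations
    vals = [response(p, e) for p in phi_iv for e in e_iv]
    lo, hi = min(vals), max(vals)
    if hi <= threshold:
        belief += m        # the whole focal element satisfies the criterion
    if lo <= threshold:
        plausibility += m  # the focal element can satisfy the criterion
```

Belief and Plausibility then bracket the unknown probability that the displacement stays below the threshold, which is how the probability of unsatisfactory performance is bounded in the study.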
Procedia PDF Downloads: 268

The Environmental Impacts of Textiles Reuse and Recycling: A Review on Life-Cycle-Assessment Publications
Authors: Samuele Abagnato, Lucia Rigamonti
Abstract:
Life-Cycle-Assessment (LCA) is an effective tool to quantify the environmental impacts of reuse models and recycling technologies for textiles. In this work, publications from the last ten years about LCA on textile waste are classified according to location, goal and scope, functional unit, waste composition, impact assessment method, impact categories, and sensitivity analysis. Twenty papers have been selected: 50% focus only on recycling, 30% only on reuse, and 15% on both, while only one paper considers only the final disposal of the waste. It is found that reuse is generally the best way to decrease the environmental impacts of textile waste management because of the avoided impacts of manufacturing a new item. In a comparison between a product made with recycled yarns and a product from virgin materials, the first option generally has a lower impact, especially for the categories of climate change, water depletion, and land occupation, while for other categories, such as eutrophication or ecotoxicity, the impacts of recycled fibres can be higher under certain conditions. Cultivation seems to have quite high impacts when natural fibres are involved, especially in the land use and water depletion categories, while manufacturing requires a remarkable amount of electricity, with an associated impact on climate change. In the analysis of the reuse processes, the laundry phase carries considerable weight, with water consumption and impacts related to the use of detergents. Regarding the sensitivity analysis, one of the main variables that influences the LCA results, and that needs further investigation when modeling such systems, is the substitution rate between recycled and virgin fibres, that is, the amount of recycled material that can be used in place of virgin material. Related to this, the yield of the recycling processes also has a strong influence on the impact results.
The substitution rate is also important in the modeling of the reuse processes because it represents the number of new items whose purchase is avoided by the reused ones. Another aspect that appears to have a large influence on the impacts is consumer behaviour during the use phase (for example, the number of uses between two laundry cycles). In conclusion, to gain a deeper knowledge of the impacts of a life-cycle approach to textile waste, further data and research are needed on the modeling of the substitution rate and of the use-phase habits of consumers.

Keywords: environmental impacts, life-cycle-assessment, textiles recycling, textiles reuse, textiles waste management
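The avoided-burden logic behind the substitution rate can be made concrete with a toy calculation. The impact figures below are invented placeholders, not values from the reviewed papers:

```python
def net_impact(process_impact, virgin_impact, process_yield, substitution_rate):
    """Net impact (e.g. kg CO2-eq per kg of collected textile waste):
    the recycling (or reuse) process burden minus the credit for the
    virgin production it displaces."""
    avoided = process_yield * substitution_rate * virgin_impact
    return process_impact - avoided

# Hypothetical fibre recycling: 0.8 kg CO2-eq of processing per kg of
# waste, 75% process yield, and 1 kg of recyclate replacing 0.9 kg of
# virgin fibre whose production emits 5.0 kg CO2-eq/kg
net = net_impact(0.8, 5.0, 0.75, 0.9)  # negative means a net saving
```

Because the result scales linearly with both the yield and the substitution rate, a sensitivity analysis on those two parameters, as the review recommends, can change the magnitude or even the sign of the conclusion.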
Procedia PDF Downloads: 89

Understanding the Nature of Blood Pressure as Metabolic Syndrome Component in Children
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Pediatric overweight and obesity need attention because they may lead to morbid obesity, which may develop into metabolic syndrome (MetS). Criteria used for the definition of adult MetS cannot be applied to pediatric MetS. Dynamic physiological changes that occur during childhood and adolescence require the evaluation of each parameter based upon age intervals. The aim of this study is to investigate the distribution of blood pressure (BP) values within diverse pediatric age intervals and the possible use and clinical utility of a recently introduced Diagnostic Obesity Notation Model Assessment Tension (DONMA tense) Index derived from systolic BP (SBP) and diastolic BP (DBP) as (SBP+DBP)/200. Such a formula may enable a more integrative picture for the assessment of pediatric obesity and MetS due to the use of both SBP and DBP. 554 children aged 6-16 years participated in the study; the study population was divided into two groups based upon age. The first group comprised 280 cases aged 6-10 years (72-120 months), while those aged 10-16 years (121-192 months) constituted the second group. The values of SBP, DBP and the formula (SBP+DBP)/200 covering both were evaluated. Each group was divided into seven subgroups with varying degrees of obesity and MetS criteria. Two clinical definitions of MetS were used: MetS3 (children with three major components) and MetS2 (children with two major components). The other groups were morbid obese (MO), obese (OB), overweight (OW), normal (N) and underweight (UW). The children were assigned to the groups according to the age- and sex-based body mass index (BMI) percentile values tabulated by WHO. Data were evaluated by SPSS version 16 with p < 0.05 as the statistical significance threshold. The tension index was evaluated in the groups above and below 10 years of age. This index differed significantly between the N and MetS groups as well as between the OW and MetS groups (p = 0.001) above 120 months.
However, below 120 months, significant differences existed between MetS3 and MetS2 (p = 0.003) as well as between MetS3 and MO (p = 0.001). In comparison with the SBP and DBP values, the tension index values enabled a more clear-cut separation between the groups. The tension index was capable of discriminating MetS3 from MetS2 in the group composed of children aged 6-10 years; this was not possible in the older group of children, so the index was more informative for the younger group. This study also confirmed that the 130 mm Hg and 85 mm Hg cut-off points for SBP and DBP, respectively, are too high to serve as MetS criteria in children, because the mean value of the tension index was calculated as 1.00 among MetS children. This finding shows that much lower cut-off points must be set for SBP and DBP for the diagnosis of pediatric MetS, especially for children under 10 years of age. This index may be recommended to discriminate MO, MetS2 and MetS3 in the 6-10 years age group, whose MetS diagnosis is problematic.

Keywords: blood pressure, children, index, metabolic syndrome, obesity
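As a minimal illustration of the index itself (the grouping and statistics above come from the study; the example pressures below do not):

```python
def tension_index(sbp, dbp):
    """DONMA tension index: (SBP + DBP) / 200, both pressures in mm Hg."""
    return (sbp + dbp) / 200.0

# At the adult MetS cut-offs of 130/85 mm Hg the index already exceeds
# the mean of 1.00 observed among MetS children, which is why the
# authors argue these cut-offs are too high for pediatric use
adult_cutoff_index = tension_index(130, 85)  # 1.075
```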
Procedia PDF Downloads: 117

Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide
Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal
Abstract:
The prevalence of lymphedema in women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California's Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight loss progress, and lymphedema symptoms compared to subjects who were not treated with semaglutide. The subjects were followed for 12 months. Subjects treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9, and 12 months, along with medical assessment. The untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected, and blood work was reviewed at these appointments. Paired-difference t-tests, chi-squared tests, and independent-sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish speaking, 42% disabled, 57% employed, 54% with an income below 30 thousand dollars, and an average BMI of 40. Both treatment and non-treatment groups noted that the most common symptoms were leg swelling (x̄=3.2, SD=1.3), leg pain (x̄=3.2, SD=1.6), loss of daily function (x̄=3, SD=1.4), and negative body image (x̄=4.4, SD=0.54).
Subjects in the semaglutide treatment group with >3 months of treatment, compared to the untreated group, demonstrated the following: 55% of treated subjects had a 10% weight loss vs 3% in the untreated group (average BMI reduction of 11% vs 2.5% untreated, p<0.05), along with improved subjective feelings about their lymphedema symptoms: leg swelling (x̄=2.4, SD=0.45 vs x̄=3.2, SD=1.3, p<0.05), leg pain (x̄=2.2, SD=0.45 vs x̄=3.2, SD=1.6, p<0.05), and heaviness (x̄=2.2, SD=0.45 vs x̄=3, SD=1.56, p<0.05). Improvement in diabetes management was demonstrated by an average decrease of 0.9% in A1C values compared with 0.1% in the untreated group, p<0.05. In comparison to untreated subjects, subjects on semaglutide showed a 6 cm decrease in the circumference of the leg, knee, calf, and ankle compared with 2 cm in untreated subjects, p<0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and the functional, physical and psychosocial symptoms of secondary lymphedema.

Keywords: diabetes, secondary lymphedema, semaglutide, obesity
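The paired-difference t-test used for the pre/post Likert comparisons can be sketched without a statistics library; the scores below are hypothetical, not study data:

```python
import math

def paired_t(pre, post):
    """Paired-difference t statistic (post minus pre) for matched scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)  # t = mean difference / standard error

# Hypothetical leg-swelling Likert scores for five subjects before and
# after treatment; a negative t indicates improvement
t_stat = paired_t([3, 4, 5, 4, 3], [2, 3, 3, 4, 2])
```

The resulting statistic would then be compared against a t distribution with n-1 degrees of freedom to obtain p-values like those reported above.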
Procedia PDF Downloads: 61

Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System
Authors: Zhang Kehan, Du Luona
Abstract:
Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, transfer distance, etc., all aimed at improving the efficiency of the WPT system, which is a decisive factor in practical applications. Moreover, coil structures such as the spiral circular coil and the helical coil with variable distance between two turns also have an indispensable effect on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable distance between two turns, and experimental results show that the efficiency of a spiral circular coil with an optimum distance between two turns is the highest. Following the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ (mutual inductance squared over primary coil resistance) to measure the efficiency of spiral circular coil and helical coil WPT systems. If the distance s between two turns is too small, proximity effect theory shows that the current induced in the conductor, caused by the variable flux created by the current flowing in the skin of the neighbouring conductor, opposes the source current and has a non-negligible impact on coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. The aforementioned study proves that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing WPT system structures. Limited by system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm.
In other words, the number of turns N changes with s. In both the spiral circular and helical structures, the distance between every two turns in the secondary coil is set to a constant value of 1 mm to guarantee that R₂ is not variable. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the distance between every two turns in the primary coil, sₚ, varies from 0 mm to 10 mm. In the two structural models, the distance between the primary and secondary coils is 50 mm and the wire diameter is 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm respectively, at which the value of M²/R₁ is the largest. The spiral circular coil is clearly the first choice when designing a WPT system, since its value of M²/R₁ is larger than that of the helical coil under the same conditions.

Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer
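Selecting the turn spacing by the M²/R₁ figure of merit reduces to a one-line comparison once M and R₁ are available from simulation. The values below are invented placeholders, not the COMSOL results, chosen only to show the bookkeeping:

```python
# Hypothetical simulated pairs (mutual inductance M in H, primary coil
# resistance R1 in ohm) for three turn spacings s (mm) of a spiral coil
candidates = {
    1.0: (2.1e-6, 0.30),
    2.0: (2.0e-6, 0.24),
    3.0: (1.8e-6, 0.22),
}

def figure_of_merit(m, r1):
    # larger M^2/R1 corresponds to higher link efficiency in a
    # series-series resonant WPT system
    return m ** 2 / r1

best_s = max(candidates, key=lambda s: figure_of_merit(*candidates[s]))
```

With these assumed values the 2 mm spacing wins: a slightly smaller M is outweighed by the drop in proximity-effect resistance, which is the trade-off the paper describes.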
Procedia PDF Downloads: 345

Characterization of the Blood Microbiome in Rheumatoid Arthritis Patients Compared to Healthy Control Subjects Using V4 Region 16S rRNA Sequencing
Authors: D. Hammad, D. P. Tonge
Abstract:
Rheumatoid arthritis (RA) is a common and disabling autoimmune disease in which the body's immune system attacks healthy tissues, mounting the kind of complicated and long-lasting response that is normally reserved for foreign objects. The disease affects millions of people and causes joint inflammation, ultimately leading to the destruction of cartilage and bone; interestingly, the disease mechanism still remains unclear. It is likely that RA occurs as a result of a complex interplay of genetic and environmental factors, including an imbalance in the microorganism population inside our bodies. The human microbiome, or microbiota, is an extensive community of microorganisms in and on the bodies of animals, comprising bacteria, fungi, viruses, and protozoa. Recently, the development of molecular techniques to characterize entire bacterial communities has renewed interest in the involvement of the microbiome in the development and progression of RA. We believe that an imbalance in specific bacterial species in the gut, mouth and other sites may lead to atopobiosis, the translocation of these organisms into the blood, and that this may lead to changes in immune system status. The aim of this study was, therefore, to characterize the microbiome of RA serum samples in comparison to healthy control subjects using 16S rRNA gene amplification and sequencing. Serum samples were obtained from healthy control volunteers and from patients with RA, both prior to and following treatment. The bacterial community present in each sample was identified by V4 region 16S rRNA amplification and sequencing. Bacterial identification, to the lowest taxonomic rank, was performed using a range of bioinformatics tools.
The proportions of the Lachnospiraceae, Ruminococcaceae, and Halomonadaceae families were significantly increased in the serum of RA patients compared with healthy control serum. Furthermore, the abundance of Bacteroides, Lachnospiraceae NK4A136 group, Lachnospiraceae UCG-001, Ruminococcaceae UCG-014, Ruminococcus 1, and Shewanella was also raised in the serum of RA patients relative to healthy control serum. These data support the notion of a blood microbiome and reveal RA-associated changes that may have significant implications for biomarker development and may present much-needed opportunities for novel therapeutic development.

Keywords: blood microbiome, gut and oral bacteria, rheumatoid arthritis, 16S rRNA gene sequencing
Procedia PDF Downloads: 132

Optical and Surface Characteristics of Direct Composite, Polished and Glazed Ceramic Materials after Exposure to Toothbrush Abrasion and Staining Solution
Authors: Maryam Firouzmandi, Moosa Miri
Abstract:
Aim and background: Esthetic and structural reconstruction of anterior teeth may require the combination of different restorative materials; a direct composite veneer alongside a ceramic crown is a common treatment option. Despite their initial match, their long-term harmony in terms of optical and surface characteristics is a matter of concern. The purpose of this study is to evaluate and compare the optical and surface characteristics of a direct composite and of polished and glazed ceramic materials after exposure to toothbrush abrasion and a staining solution. Materials and methods: Ten 2 mm thick disc-shaped specimens were prepared from IPS Empress Direct composite and twenty specimens from IPS e.max CAD blocks. The composite specimens and ten of the ceramic specimens were polished using D&Z composite and ceramic polishing kits. The other ten ceramic specimens were glazed with glazing liquid. Baseline measurements of roughness, CIELAB coordinates, and luminance were recorded. The specimens then underwent thermocycling, toothbrushing, and coffee staining, after which the final measurements were recorded. The color coordinates were used to calculate ΔE76, ΔE00, the translucency parameter, and the contrast ratio. Data were analyzed by one-way ANOVA and the post hoc LSD test. Results: The baseline and final roughness of the study groups were not different. At baseline, the order of roughness was composite < glazed ceramic < polished ceramic, but after aging no difference between the ceramic groups was detected. The comparison of baseline and final luminance was similar to that of roughness but in reverse order. Unlike the roughness change, which was comparable between the groups, the change in luminance of the glazed ceramic group was higher than in the other groups. ΔE76 and ΔE00 were 18.35 and 12.84 in the composite group, 1.3 and 0.79 in the glazed ceramic group, and 1.26 and 0.85 in the polished ceramic group.
These values for the composite group were significantly different from those of the ceramic groups. The translucency of the composite at baseline was significantly higher than its final value, but there was no significant difference between these values in the ceramic groups. The composite was more translucent than the ceramics at both the baseline and final measurements. Conclusion: The glazed ceramic surface was smoother than the polished ceramic. Aging did not change the roughness. The optical properties (color and translucency) of the composite were influenced by aging. The luminance of the composite, glazed ceramic, and polished ceramic decreased after aging, but the reduction in the glazed ceramic was more pronounced.

Keywords: ceramic, toothbrush abrasion, staining solution, composite resin
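For reference, the ΔE76 metric reported above is simply the Euclidean distance between two CIELAB coordinates; a minimal sketch, with arbitrary example coordinates:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Arbitrary example: a shift of 3 units in L* and 4 units in a*
diff = delta_e76((50.0, 0.0, 0.0), (53.0, 4.0, 0.0))  # 5.0
```

ΔE00 (CIEDE2000) builds weighting and rotation terms on top of this distance to better match perceived difference, which is why the two values reported for each group differ.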
Procedia PDF Downloads: 185

Formulation of Value-Added Beef Meatballs with the Addition of Pomegranate (Punica granatum) Extract as a Source of Natural Antioxidant
Authors: M. A. Hashem, I. Jahan
Abstract:
The experiment was conducted to find out the effect of different levels of pomegranate (Punica granatum) extract and the synthetic antioxidant BHA (butylated hydroxyanisole) on fresh and preserved beef meatballs in order to make a functional food. For this purpose, ground beef samples were divided into five treatment groups: a control group (T1), a 0.1% synthetic antioxidant group (T2), and 0.1%, 0.2% and 0.3% pomegranate extract groups (T3, T4 and T5 respectively). Proximate analysis, sensory tests (color, flavor, tenderness, juiciness, overall acceptability), cooking loss, pH value, free fatty acids (FFA), thiobarbituric acid reactive substances (TBARS), peroxide value (POV) and microbiological examination were determined in order to evaluate the antioxidant and antimicrobial activities of pomegranate extract compared to BHA on the first day before freezing, and to follow meatball quality over a shelf life of 60 days under frozen storage at -20˚C. The sampling intervals were 0, 15, 30 and 60 days. The dry matter (DM) content of all the treatment groups differed significantly (p<0.05), and DM content increased significantly (p<0.05) over the storage intervals. The crude protein (CP) content increased significantly (p<0.05) among the treatment groups, while the ether extract (EE) and ash contents decreased significantly (p<0.05) across the treatment levels. FFA values, TBARS and POV decreased significantly (p<0.05) across the treatment levels. Color, odor, tenderness, juiciness and overall acceptability decreased significantly (p<0.05) over the storage intervals. Raw and cooked pH increased significantly (p<0.05) across the treatment levels, and the cooking loss (%) differed significantly (p<0.05) between treatment levels.
TVC (log CFU/g), TCC (log CFU/g) and TYMC (log CFU/g) decreased significantly (p<0.05) across treatment levels and storage intervals in comparison to the control. Considering CP, tenderness, juiciness, overall acceptability, cooking loss, FFA, POV, TBARS and the microbial analysis, it can be concluded that pomegranate extract at 0.1%, 0.2% and 0.3% can be used instead of the synthetic antioxidant BHA in beef meatballs. On the basis of sensory evaluation, nutrient quality, physicochemical properties, biochemical analysis and microbial analysis, 0.3% pomegranate extract can be recommended for the formulation of value-added beef meatballs enriched with natural antioxidant.

Keywords: antioxidant, pomegranate, BHA, value added meat products
Procedia PDF Downloads: 246

Numerical Board Game for Low-Income Preschoolers
Authors: Gozde Inal Kiziltepe, Ozgun Uyanik
Abstract:
There is growing evidence that socioeconomic status (SES)-related differences in mathematical knowledge begin primarily in the early childhood period. Preschoolers from low-income families are likely to perform substantially worse in mathematical knowledge than their counterparts from middle- and higher-income families. The differences are seen on a wide range of tasks: recognizing written numerals, counting, adding and subtracting, and comparing numerical magnitudes. Early differences in numerical knowledge have a lasting effect on children's mathematical knowledge in later grades. In this respect, analyzing the effect of a number board game on the number knowledge of 48-60-month-old children from disadvantaged low-income families constitutes the main objective of the study. Participants were 71 preschoolers from a childcare center serving low-income urban families. Children were randomly assigned to the number board condition or to the color board condition; the number board condition included 35 children and the color board condition 36 children. Both board games were 50 cm long and 30 cm high, had 'The Great Race' written across the top, and included 11 horizontally arranged, equally sized squares of different colors, with the leftmost square labeled 'Start'. The numerical board had the numbers 1-10 in the rightmost 10 squares; the color board had different colors in those squares. Children were presented a rabbit or a bear token to select, and on each trial spun a spinner to determine whether the token would move one or two spaces. The number-condition spinner had a '1' half and a '2' half; the color-condition spinner had colors matching the colors of the squares on the board. Children met one-on-one with an experimenter for four 15- to 20-minute sessions within a 2-week period. In the first and fourth sessions, children were administered identical pretest and posttest measures of numerical knowledge.
All children were presented three numerical tasks and one subtest in the following order: counting, numerical magnitude comparison, numerical identification, and the Count Objects – Circle Number Probe subtest of the Early Numeracy Assessment. In addition, the same numerical tasks and subtest were given as a follow-up test four weeks after the post-test administration. The findings showed a meaningful difference between the scores of children who played the color board game and those who played the number board game, in favor of the number board group.

Keywords: low income, numerical board game, numerical knowledge, preschool education
Procedia PDF Downloads: 353

Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled with the double-loop model and its variations. Metacognition has also been suggested as a concept to describe team learning as more than a simple sum of the individual learning of the team members. Team learning correlates positively both with the individual motivation of its members and with collective factors within the team. Here, the team learning of two teaching teams whose members were previously very independent is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are becoming larger for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and experiences; this occurs in student teams. Secondly, teaching multidisciplinary skills requires multidisciplinary, team-based teaching from the teachers as well. Team formation phases have been identified and are widely accepted, and team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two parallel course units in engineering education: the first in Industrial Automation Technology and the second in Development of Medical Devices. The courses have separate student groups and are held on different campuses, both running in parallel within an 8-week period. Each is taught by a group of four teachers with several years of teaching experience, though gained individually. The team role stress scale survey is given to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload and qualitative role overload. Some comparison to studies on project teams can be drawn.
The two teaching groups are at different stages of team development. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help gauge the maturity of functional, well-established teams. Mature teams report higher job satisfaction and deliver higher performance. Teaching teams in particular, which deliver the highly intangible results of learning outcomes, are sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams.

Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads: 225

Textual Criticism on the Age of 'Wanli' Shipwreck Porcelain and Its Comparison with 'Witte Leeuw' and Hatcher Shipwreck Porcelain
Authors: Yang Liu, Dongliang Lyu
Abstract:
After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant for studies of exported porcelains and of the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as its research strategy, the paper aims to shed new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelains from the perspective of domestic versus export markets, proceeding to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration being traded in this period. The porcelain data from two other shipwreck projects, the Witte Leeuw and the Hatcher, were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archaeology of the region. The marine archaeologist Sten Sjostrand named the ship the 'Wanli shipwreck' because its porcelain cargoes are typical of those made during the reign of the Emperor Wanli of the Ming dynasty. Though some scholars question the appropriateness of the name, history's final verdict is still to be made. Building on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains against porcelains unearthed from tombs or abandoned in towns that carry time-specific reign marks.
All these materials provide strong evidence suggesting that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white porcelain bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process; the author provides a corresponding analysis of these two interesting phenomena.

Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck
Procedia PDF Downloads: 189

Creatine Associated with Resistance Training Increases Muscle Mass in the Elderly
Authors: Camila Lemos Pinto, Juliana Alves Carneiro, Patrícia Borges Botelho, João Felipe Mota
Abstract:
Sarcopenia, a syndrome characterized by progressive and generalized loss of skeletal muscle mass and strength, currently affects over 50 million people and increases the risk of adverse outcomes such as physical disability, poor quality of life, and death. The aim of this study was to examine the efficacy of creatine supplementation associated with resistance training on muscle mass in the elderly. A 12-week, double-blind, randomized, parallel-group, placebo-controlled trial was conducted. Participants were randomly allocated into one of the following groups: placebo with resistance training (PL+RT, n=14) and creatine supplementation with resistance training (CR+RT, n=13). The subjects from the CR+RT group received 5 g/day of creatine monohydrate, and the subjects from the PL+RT group were given the same dose of maltodextrin. Participants were instructed to ingest the supplement immediately after lunch on non-training days and immediately after the resistance training session on training days, dissolved in a beverage comprising 100 g of lemon-flavored maltodextrin. Participants of both groups undertook a supervised exercise training program for 12 weeks (3 times per week). The subjects were assessed at baseline and after 12 weeks. The primary outcome was muscle mass, assessed by dual-energy X-ray absorptiometry (DXA). The secondary outcome was the classification of participants into one of the three stages of sarcopenia (presarcopenia, sarcopenia, and severe sarcopenia) based on skeletal muscle mass index (SMI), handgrip strength, and gait speed. The CR+RT group had a significant increase in SMI and muscle mass (p<0.0001), a significant decrease in android and gynoid fat (p=0.028 and p=0.035, respectively), and a tendency toward decreased body fat (p=0.053) after the intervention. The PL+RT group only had a significant increase in SMI (p=0.007).
The main finding of this clinical trial was that creatine supplementation combined with resistance training increased muscle mass in our elderly cohort (p=0.02). In addition, the number of subjects diagnosed with one of the three stages of sarcopenia at baseline decreased in the creatine-supplemented group in comparison with the placebo group (CR+RT, n=-3; PL+RT, n=0). In summary, 12 weeks of creatine supplementation associated with resistance training resulted in increases in muscle mass. This is the first study with elderly participants of both sexes to show the same increase in muscle mass with a smaller amount of creatine supplementation over a short period. Future long-term research should investigate the effects of these interventions in sarcopenic elderly.
Keywords: creatine, dietetic supplement, elderly, resistance training
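The three-stage classification used as the secondary outcome can be sketched as a simple decision rule. This is an illustrative sketch only: the cutoff values and the staging logic below follow the general EWGSOP-style scheme (low muscle mass alone = presarcopenia; plus low strength or slow gait = sarcopenia; all three = severe sarcopenia) and are assumptions, not values taken from this study.

```python
# Illustrative sarcopenia staging by SMI, handgrip strength, and gait speed.
# Cutoffs are hypothetical placeholders, not the study's actual thresholds.
def sarcopenia_stage(smi, grip_kg, gait_m_s,
                     smi_cut=7.0, grip_cut=30.0, gait_cut=0.8):
    low_mass = smi < smi_cut           # low skeletal muscle mass index
    low_strength = grip_kg < grip_cut  # low handgrip strength
    slow_gait = gait_m_s < gait_cut    # slow gait speed
    if not low_mass:
        return "no sarcopenia"
    if low_strength and slow_gait:
        return "severe sarcopenia"
    if low_strength or slow_gait:
        return "sarcopenia"
    return "presarcopenia"
```

A participant's stage can then be compared at baseline and after the 12-week intervention to count transitions between stages.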
Procedia PDF Downloads 474
866 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research
Authors: Saif Al-Qaisi, Alif Saba
Abstract:
Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants’ physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle’s isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), the latissimus dorsi (LD), and the thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same: trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and the averages were then computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would have been needed to detect significant differences in the Tukey tests.
The arch test was associated with the highest EMG average at the LES, and it also resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average, and it resulted in the maximum EMG activity most often (three out of six participants). For the LD, participants obtained their maximum EMG from either chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although the aforementioned techniques had the highest averages, they did not always result in the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics
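The normalization step described above, expressing task EMG as a percentage of the maximum voluntary contraction, can be sketched in a few lines. The function name and the example amplitudes are illustrative, not values taken from the study:

```python
# Normalize a raw EMG signal to a percentage of the maximum voluntary
# contraction (%MVC). The MVC reference is the largest peak recorded
# across all MVC trials for that muscle.
def normalize_to_mvc(raw_emg, mvc_trials):
    mvc_peak = max(max(trial) for trial in mvc_trials)
    return [100.0 * sample / mvc_peak for sample in raw_emg]

# Example: task EMG samples (mV) against two MVC trials peaking at 0.48 mV
task = [0.06, 0.12, 0.09]
trials = [[0.40, 0.48, 0.31], [0.44, 0.29, 0.35]]
percent_mvc = normalize_to_mvc(task, trials)  # 0.12 mV -> 25 %MVC
```

Using more than one MVC technique, as the abstract recommends, amounts to pooling additional trials into `mvc_trials` so the reference peak is less likely to underestimate the true MVC.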
Procedia PDF Downloads 193
865 Comparison of Cardiovascular and Metabolic Responses Following In-Water and On-Land Jump in Postmenopausal Women
Authors: Kuei-Yu Chien, Nai-Wen Kan, Wan-Chun Wu, Guo-Dong Ma, Shu-Chen Chen
Abstract:
Purpose: The purpose of this study was to investigate the responses of systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), rating of perceived exertion (RPE), and lactate following continued high-intensity interval exercise in water and on land. The results can serve as an exercise program design reference for health care and fitness professionals. Method: A total of 20 volunteer postmenopausal women were included in this study. The inclusion criteria were: duration of menopause > 1 year; and a sedentary lifestyle, defined as engaging in moderate-intensity exercise less than three times per week, or less than 20 minutes per day. Participants visited the experimental site three times. At the first visit, body composition was measured and participants filled out the questionnaire. Participants were randomly assigned to the exercise environment (water or land) for the second and third visits. Water exercise testing was performed in water at trochanter level. In the continued jump testing, each bout consisted of two sets of 10-second maximal voluntary jumps, with one minute of dynamic rest (walking or running) at 50% heart rate reserve within each set. SBP, DBP, HR, RPE of the whole body/thigh (RPEW/RPET), and lactate were measured before and after testing. HR, RPEW, and RPET were monitored after 1, 2, and 10 min of exercise testing. SBP and DBP were measured after 10 and 30 min of exercise testing. Results: The responses of SBP and DBP after exercise testing in water were higher than those on land. Lactate levels after exercise testing in water were lower than those on land. The responses of RPET were lower than those on land at 1 and 2 minutes post-exercise. Heart rate recovery in water was faster than that on land at 5 minutes post-exercise. Conclusion: This study showed that water interval jump exercise induces higher cardiovascular responses with lower RPE responses and lactate levels than on-land jump exercise in postmenopausal women.
Fatigue is one of the major barriers to exercise behavior. Jump exercise can enhance cardiorespiratory fitness, lower-extremity power, strength, and bone mass, offering several health benefits to middle-aged and older adults. This study showed that water interval jumping could feel more relaxed while still achieving a cardiorespiratory exercise intensity comparable to land-based exercise.
Keywords: interval exercise, power, recovery, fatigue
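The "50% heart rate reserve" target used for the dynamic rest between sets can be computed with the standard Karvonen formula, target HR = resting HR + fraction × (max HR − resting HR). A minimal sketch; the example heart rates are illustrative, not participant data:

```python
# Karvonen formula: target heart rate from heart rate reserve (HRR).
def target_hr(hr_rest, hr_max, fraction):
    return hr_rest + fraction * (hr_max - hr_rest)

# Example: resting HR 70 bpm, max HR 170 bpm, 50% HRR target
hr = target_hr(70, 170, 0.5)  # -> 120.0 bpm
```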
Procedia PDF Downloads 408
864 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of the lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and reinterpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
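The matching step described above, searching an index for stems and inflectional endings, can be sketched as follows. The toy stem index, the list of endings, and the word forms are invented for illustration and are not taken from Norna or from the Dictionary of Old English Corpus:

```python
# Toy dictionary-based lemmatiser: strip a known inflectional ending,
# then look the remaining stem up in a stem-to-lemma index.
STEM_INDEX = {"cyning": "cyning", "stan": "stan", "luf": "lufian"}
ENDINGS = ["um", "as", "es", "e", "a", ""]  # longest first; "" = bare stem

def lemmatise(form):
    for ending in ENDINGS:
        if form.endswith(ending):
            stem = form[:len(form) - len(ending)] if ending else form
            if stem in STEM_INDEX:
                return STEM_INDEX[stem]
    return None  # unmatched form, left for manual lemmatisation

lemmatise("cyningas")  # -> "cyning"
```

The roughly 20% of forms that fall through to `None` in a sketch like this correspond to the residue that still requires manual lemmatisation.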
Procedia PDF Downloads 212
863 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from 100 GHz to approximately 10 THz, between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, and continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This contradicts the conventional assumption, routinely made in quantum optics, that only the off-diagonal matrix elements persist.
The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes toward experimental observation and practical implementation of the predicted effect are discussed as well.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
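The distinction drawn above can be made explicit. In the two-level basis $\{|1\rangle, |2\rangle\}$, the dipole moment operator has the general matrix form (a standard textbook decomposition, not a formula quoted from the authors' work):

```latex
\hat{d} = d_{11}\,|1\rangle\langle 1| + d_{22}\,|2\rangle\langle 2|
        + d_{12}\bigl(|1\rangle\langle 2| + |2\rangle\langle 1|\bigr)
```

Parity symmetry in natural atoms forces $d_{11} = d_{22} = 0$, leaving only the off-diagonal transition element $d_{12}$; the effect discussed here requires $d_{11} \neq d_{22}$, which becomes admissible once inversion symmetry is broken, as in asymmetric quantum dots.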
Procedia PDF Downloads 271
862 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India
Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia
Abstract:
Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a shift from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobile. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who do not own personal vehicles to reach them on foot or by cycle. Yet residents travelling on different modes end up with similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and the urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities.
The paper concludes with recommendations for managing densities in urban areas, promoting low-carbon transport choices such as improved non-motorized transport and public transport infrastructure, and minimizing personal vehicle usage in the Global South.
Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification
Procedia PDF Downloads 216
861 Effects of Lipoic Acid Supplementation on Activities of Cyclooxygenases and Levels of Prostaglandins E2 and F2 Alpha Metabolites in the Offspring of Rats with Streptozocin-Induced Diabetes
Authors: H. Y. Al-Matubsi, G. A. Oriquat, M. Abu-Samak, O. A. Al Hanbali, M. Salim
Abstract:
Background: Uncontrolled diabetes mellitus (DM) is an etiological factor for recurrent pregnancy loss and major congenital malformations in the offspring. Antioxidant therapy has been advocated to overcome the oxidant-antioxidant disequilibrium inherent in diabetes. The aims of this study were to evaluate the protective effect of lipoic acid (LA) on fetal outcome and to elucidate changes that may be involved in the mechanism(s) implicit in diabetic fetopathy. Methods: Female rats were rendered hyperglycemic using streptozocin and then mated with normal male rats. Pregnant non-diabetic (group 1, n=9; and group 2, n=7) or pregnant diabetic (group 3, n=10; and group 4, n=8) rats were treated daily with either lipoic acid (LA) (30 mg/kg body weight; groups 2 and 4) or vehicle (groups 1 and 3) between gestational days 0 and 15. On day 15 of gestation, the rats were sacrificed, and the fetuses, placentas, and membranes were dissected out of the uterine horns. Following morphological examination, the fetuses, placentas, and membranes were homogenized and used to measure cyclooxygenase (COX) activities and the levels of the metabolites of prostaglandin (PG) E2 (PGEM) and PGF2α (PGFM). Maternal liver and plasma total glutathione levels were also determined. Results: Supplementation of diabetic rats with LA was found to significantly (P<0.05) reduce resorption rates and increase mean fetal weight compared to the untreated diabetic group. Treatment of diabetic rats with LA led to a significant (P<0.05) increase in liver and plasma total glutathione in comparison with untreated diabetic rats. Decreased levels of PGEM and elevated levels of PGFM in the fetuses, placentas, and membranes were characteristic of experimental diabetic gestation associated with malformation. LA treatment of diabetic mothers failed to normalize PGEM levels to those of the non-diabetic control rats.
However, the levels of PGEM in malformed fetuses from LA-treated diabetic mothers were significantly (P < 0.05) higher than those in malformed fetuses from diabetic rats. Conclusions: We conclude that LA can reduce congenital malformations in the offspring of diabetic rats at day 15 of gestation. However, LA treatment did not completely prevent the occurrence of malformations; other factors, such as arachidonic acid deficiency and altered prostaglandin metabolism, may be involved in the pathogenesis of diabetes-induced congenital malformations.
Keywords: diabetes, lipoic acid, pregnancy, prostaglandins
Procedia PDF Downloads 262
860 Applying Organic Natural Fertilizer to 'Orange Rubis' and 'Farbaly' Apricot Growth, Yield and Fruit Quality
Authors: A. Tarantino, F. Lops, G. Lopriore, G. Disciglio
Abstract:
Biostimulants are organic fertilizers that can be applied in agriculture to increase nutrient uptake, growth, and development of plants and to improve quality and productivity with positive environmental impacts. The aim of this study was to test the effects of some commercial biostimulant products (Bion® 50 WG, Hendophyt® PS, Ergostim® XL, and Radicon®) on the vegeto-productive behavior and qualitative fruit characteristics of two emerging apricot cultivars (Orange Rubis® and Farbaly®). The study was conducted during the spring-summer season of 2015, in a commercial orchard located in the agricultural area of Cerignola (Foggia district, Apulia region, Southern Italy). Eight-year-old apricot trees, cv. 'Orange Rubis' and 'Farbaly', were used. The experimental data recorded during the trial were: shoot length, total number of flower buds, flower bud drop, and time of flowering and fruit set. Total fruit yield per tree and quality parameters were determined. The experimental data showed some specific differences among the biostimulant treatments. Concerning the yield of 'Orange Rubis', except for the Bion treatment, the other three biostimulant treatments showed tendentially lower values than the control. The yield of 'Farbaly' was lower for the Bion and Hendophyt treatments and higher for the Ergostim treatment, when compared with the untreated control. Concerning soluble solids content, the juice of 'Farbaly' fruits always had a higher content than that of 'Orange Rubis'. In particular, the Bion and Hendophyt treatments showed, in both harvests, values tendentially higher than the control. In contrast, the four biostimulant treatments did not significantly affect this parameter in 'Orange Rubis'. With regard to fruit firmness, some differences were observed between the two harvest dates and among the four biostimulant treatments.
At the first harvest date, 'Orange Rubis' treated with the Bion and Hendophyt biostimulants showed texture values tendentially lower than the control, while 'Farbaly' showed, for all the biostimulant treatments, fruit firmness values significantly lower than the control. At the second harvest, almost all the biostimulant treatments in both 'Orange Rubis' and 'Farbaly' showed values lower than the control; only 'Farbaly' treated with Radicon showed a higher value in comparison to the control.
Keywords: apricot, fruit quality, growth, organic natural fertilizer
Procedia PDF Downloads 326
859 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. A supervised classification employing the Random Forest (RF) algorithm on Google Earth Engine (GEE) categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need to download individual satellite images and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
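The two accuracy figures reported above, overall accuracy (OA) and Cohen's kappa, are both computed from the classification confusion matrix. A minimal sketch follows; the small two-class matrix in the example is invented for illustration, not taken from the study:

```python
# Overall accuracy (OA) and Cohen's kappa from a confusion matrix.
# Rows = reference (ground truth) classes, columns = predicted classes.
def accuracy_metrics(cm):
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(len(cm)))
    oa = diag / n  # proportion of correctly classified samples
    # Expected chance agreement from the row and column marginals.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / n**2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical 2-class confusion matrix: 90 of 100 samples correct
oa, kappa = accuracy_metrics([[45, 5], [5, 45]])  # OA ~ 0.9, kappa ~ 0.8
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside OA for land cover maps with uneven class sizes.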
Procedia PDF Downloads 63
858 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards has moved toward slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and the Taguchi method (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed and compared with that of a traditional standard keyboard to investigate the influence of user experience on keyboard operation.
The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces that interact with people.
Keywords: input performance, mobile device, slim keyboard, tactile feedback
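The Taguchi signal-to-noise ratios mentioned above have standard forms; for a larger-the-better response such as typing accuracy, S/N = −10·log10((1/n)·Σ 1/yᵢ²), and the factor level with the highest S/N is taken as optimal. A minimal sketch with illustrative response values, not data from the study:

```python
import math

# Taguchi larger-the-better signal-to-noise ratio:
# S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )
def sn_larger_the_better(values):
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in values) / n)

# Replicated responses of 1.0 give mean(1/y^2) = 1, so S/N = 0 dB;
# larger, more consistent responses give a higher S/N.
sn_larger_the_better([1.0, 1.0, 1.0])  # -> 0.0 dB
```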
Procedia PDF Downloads 299
857 The Effect of Filter Design and Face Velocity on Air Filter Performance
Authors: Iyad Al-Attar
Abstract:
Air filters installed in HVAC equipment and gas turbines for power generation confront several atmospheric contaminants at various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities and pleat count with its corresponding surface area, in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigated for initial pressure drop response and fractional efficiencies. The pleating densities used for this study were 28, 30, 32, and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m3/h in increments of 500 m3/h. This experimental work highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The losses in filtration media surface area are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. It is evident from the entire array of experiments that as particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached. Beyond the MPPS, the efficiency increases with particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation did not have a pronounced effect on the MPPS.
Throughout the study, an optimal pleat count satisfying both initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action, not on the total surface area the pleat density provides.
Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop
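The link between face velocity, permeability, and initial pressure drop invoked above is commonly modeled, in the viscous-flow regime, by Darcy's law, Δp = μ·v·t/k (μ air viscosity, v velocity through the medium, t medium thickness, k permeability). A minimal sketch with illustrative values, not measurements from the study:

```python
# Darcy's law for viscous flow through a filter medium:
# pressure drop = viscosity * velocity * thickness / permeability
def darcy_pressure_drop(mu, velocity, thickness, permeability):
    return mu * velocity * thickness / permeability

# Illustrative values: air at ~20 C (mu ~ 1.8e-5 Pa.s), 0.1 m/s media
# velocity, 0.5 mm thick medium, permeability 1e-10 m^2 -> Δp in Pa
dp = darcy_pressure_drop(1.8e-5, 0.1, 5e-4, 1e-10)
```

Because Δp is inversely proportional to k, any pleat-crowding or medium-compression effect that lowers the effective permeability shows up directly as a higher initial pressure drop at the same face velocity.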
Procedia PDF Downloads 135
856 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence
Authors: Meghan E. Murray, Carri R. Tolmie
Abstract:
This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole, and execute their business practices accordingly. The prevalence of CSR has strengthened continuously over the past few years and is now common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case-study comparison of a benefit corporation and a non-benefit corporation. This study will examine the role of SQ and EQ as moderators of the relationship between the transformational leadership of employees within each company and the perception of each firm’s CSR and corporate performance. The project methodology includes creating and administering a survey composed of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data.
Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that organizations can utilize to enhance their corporate social responsibility. If high spiritual and emotional intelligence are found to positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership
Procedia PDF Downloads 127
855 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions
Authors: Ariadne Katsouras, Andrea Chareunsy
Abstract:
The number of large hydroelectric dams in the world has increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people’s living standards, and if the electricity is sold to another country, the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water; most dams are of these types. This paper gives an overview of the environmental and economic theories of the impact of dams in different regions of the world. The degree of environmental and economic impact differs across regions due to varying climates and varying social and political factors. Production of greenhouse gases from a dam’s reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as flooding of land to create the reservoir and displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam, or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, a dam can benefit a country economically, but the varying factors related to its construction, and how these are dealt with, determine the level of benefit, if any.
Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments and regression analysis. Systems analysis is also a useful method. While these approaches have value, they also have possible shortcomings. Cost-benefit analysis converts all costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it considers only the initial impact. Finally, regression analysis depends on the available data and, again, would not necessarily include feedbacks. Systems analysis, by contrast, allows more complex modelling of the environment and the economic system; it lets a clearer picture of the impacts emerge and can cover a long time frame.
Keywords: comparison, economics, environment, hydroelectric dams
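The cost-benefit analysis criticized above reduces everything to discounted dollar values; a minimal net-present-value sketch, with invented cash flows and discount rate, makes the mechanics explicit:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical dam: large up-front construction cost, then 20 years of
# net benefits (electricity sales minus maintenance costs). Illustrative only.
flows = [-1000.0] + [120.0] * 20  # in millions
project_npv = npv(flows, 0.05)    # a positive NPV would favour the project
```

The shortcoming named in the abstract shows up directly here: impacts that resist monetization, such as displacement or ecosystem feedbacks, simply never enter `flows`.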
Procedia PDF Downloads 197
854 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas
Authors: Rajashree Bothale, Venkateswara Rao
Abstract:
The Indian Himalayas, also known as the third pole, with an area of 0.9 million sq. km, contain the largest reserve of ice and snow outside the poles and affect the global climate and water availability in the perennial rivers. Variations in the extent of snow are indicative of climate change. Snow melt is both sensitive to climate change (warming) and an influencing factor on it. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions is carried out using space-based observations in visible and microwave bands. An analysis period of 2003 to 2015 is selected to identify and map the changes and trends in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping wet snow, microwave data is used, as it is sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. The enhanced-resolution images at 2.25 km from the 13.6 GHz sensor are used to analyze the backscatter response to dry and wet snow for the period 2000-2013 using a threshold method. The study area is divided into three major river basins, namely the Brahmaputra, Ganges and Indus, which also represent the division of the Himalayas into the Eastern, Central and Western Himalayas. Topographic variation across the zones shows that a majority of the study area lies in the 4000–5500 m elevation range and that the largest share of high-elevation areas (>5500 m) lies in the Western Himalayas. The effect of climate change can be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. Melt onset day shifts later from east (March 11 ± 11 days) to west (May 12 ± 15 days), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) in comparison to the Eastern Himalayas (150 ± 16 days), providing less time for melt.
Eastern Himalayan glaciers are prone to enhanced melt due to the large melt duration. The extent of snow cover, coupled with the melt/freeze status indicating solar radiation, can be used as a precursor for monsoon prediction.
Keywords: Indian Himalaya, scatterometer, snow melt/freeze, AWiFS, cryosphere
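The threshold method on Ku-band backscatter works because wet snow sharply lowers the backscatter coefficient relative to dry winter snow. A sketch of melt-onset detection follows; the 3 dB criterion and the synthetic series are illustrative assumptions, not the study's calibrated values:

```python
def melt_onset_day(sigma0_db, winter_days=60, drop_db=3.0):
    """Return the first index (day of year) at which backscatter falls more
    than drop_db below the dry-winter mean; None if no melt is detected."""
    winter_mean = sum(sigma0_db[:winter_days]) / winter_days
    for day, s in enumerate(sigma0_db):
        if day >= winter_days and s < winter_mean - drop_db:
            return day
    return None

# Synthetic daily series: stable dry-snow backscatter, wet-snow drop at day 100.
series = [-8.0] * 100 + [-14.0] * 50
onset = melt_onset_day(series)  # detects the drop at index 100
```

Melt duration would then follow by applying the same threshold in reverse to find the freeze-up day.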
Procedia PDF Downloads 260
853 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria
Authors: Muhammad Shafi'u Adamu
Abstract:
This study examined the effects of two group cognitive behavioural therapies (cognitive restructuring and rational emotive behavioural therapy) on the psychological distress of awaiting-trial inmates in correctional centres in North-West, Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. It used a quasi-experimental design involving a pre-test and a post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centres in North-West, Nigeria. 131 awaiting-trial inmates from three intact correctional centres were selected using the census technique and randomly assigned to three groups (CR, REBT and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach's alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes per week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions. Inferential statistics (ANOVA and the independent-sample t-test) were used to test the null hypotheses at the p ≤ 0.05 level of significance. Results revealed no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence showed a significant difference among the post-treatment mean scores of the three groups, with the results of a post hoc multiple-comparison test indicating a reduction of psychological distress in the treated awaiting-trial inmates. The results also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, although no such difference was found among those exposed to REBT.
The research recommends that a standardized, structured CBT counselling treatment be designed for correctional centres across Nigeria, and that CBT counselling techniques be used to treat psychological distress in both correctional and clinical settings.
Keywords: awaiting-trial inmates, cognitive restructuring, correctional centres, group cognitive behavioural therapies, rational emotive behavioural therapy
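The three-group comparison described above rests on a one-way ANOVA; a from-scratch F statistic illustrates the computation. The K10 scores below are invented for the sketch and do not reproduce the study's data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical post-treatment K10 distress scores (lower = less distress).
cr = [18, 20, 17, 19, 18]    # cognitive restructuring group
rebt = [21, 22, 20, 23, 21]  # REBT group
ctrl = [30, 32, 31, 29, 33]  # control group
f_stat = one_way_anova_f(cr, rebt, ctrl)  # a large F -> groups differ
```

An F exceeding the F(k-1, n-k) critical value at p ≤ 0.05 would then motivate the post hoc pairwise comparisons the study reports.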
Procedia PDF Downloads 88
852 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs
Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello
Abstract:
MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulatory activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool designed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model and associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information on binding sites in the 3’UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein, a component of the microRNA-induced silencing complex. In this work, we tested whether including coding-region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with information on the coding regions is more efficient in predicting microRNA targets than the algorithm trained with 3’UTR information. On the other hand, we show that 3’UTR-based predictions can be seen as complementary to the coding-region-based predictions, which suggests that predictions from both the 3’UTR and the coding regions should be considered in a comprehensive analysis.
Furthermore, we observed that the lists of targets obtained by analyzing data from one experimental approach only, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, obtained by comparing RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding-region-based trained model alongside the 3’UTR-based one.
Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction
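One simple way to treat the 3’UTR-based and coding-region-based predictions as complementary, as the abstract suggests, is a noisy-OR combination of per-gene target probabilities. This is an illustrative assumption for the sketch, not ComiR's actual scoring model:

```python
def combine_target_probs(p_utr, p_cds):
    """Noisy-OR combination under an independence assumption: the gene is a
    target if either the 3'UTR sites or the coding-region sites are functional."""
    return 1.0 - (1.0 - p_utr) * (1.0 - p_cds)

# A gene only weakly supported by each region alone scores highly when combined.
p = combine_target_probs(0.6, 0.5)  # 1 - 0.4 * 0.5 = 0.8
```

Genes ranked by the combined probability would then surface targets that neither region-specific model flags confidently on its own.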
Procedia PDF Downloads 451
851 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As in all industries, architects are using increasing amounts of automation in practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often anecdotal and lacking in objective figures and measurements. This creates confusion and raises barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions about design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, locations, and projects. The methodology is informed by the design of this framework, taking on aspects of a systematic review but compressed in time, to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting which tasks in the architectural industry will be automated, and supports more informed decisions about automation at multiple levels, ranging from individual choices to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress.
By combining these two approaches, it is possible to create a data structure that describes how much of each part of the architectural design process is automated. The data gathered in the paper serves the dual purposes of validating the framework and giving insight into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new automation milestone in less than six years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework and further areas of study are examined.
Keywords: analysis, architecture, automation, design process, technology
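The recursive task-breakdown idea can be sketched as a tree whose leaves carry a milestone-based automation score (0 to 1) and whose internal nodes aggregate their children weighted by estimated effort. The task names, effort weights, and scores below are hypothetical:

```python
def automation_fraction(task):
    """Recursively compute the automated share of a work-breakdown tree:
    leaves return their milestone score; internal nodes return the
    effort-weighted average of their children."""
    if "children" not in task:
        return task["automated"]
    total_effort = sum(child["effort"] for child in task["children"])
    return sum(child["effort"] * automation_fraction(child)
               for child in task["children"]) / total_effort

# Hypothetical breakdown of a small slice of the design process.
design = {"name": "building design", "children": [
    {"name": "concept massing", "effort": 2, "automated": 0.5},
    {"name": "documentation", "effort": 3, "automated": 0.7},
    {"name": "structural sizing", "effort": 1, "children": [
        {"name": "load takedown", "effort": 1, "automated": 1.0},
        {"name": "member checks", "effort": 1, "automated": 0.0},
    ]},
]}
frac = automation_fraction(design)  # (2*0.5 + 3*0.7 + 1*0.5) / 6 = 0.6
```

An aggregate figure like the paper's roughly 40% would come from running such a traversal over the full design-process tree.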
Procedia PDF Downloads 104
850 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2
Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen
Abstract:
Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops or other oil sources can only satisfy part of the existing demand for transportation fuel. Being clean, green and viable for mass production, microalgae are regarded as a feedstock that could enable a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, can metabolize glycerol to produce lipids. Hence, it is considered a promising microalgae-based oil source for biodiesel production and other applications. The aims of this study were to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies following the optimal results from the shake-flask system in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) quantities were increased five-fold (50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were used to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity, 131 mg/L/h, when used as a carbon source and proved to be a suitable substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production.
Under the fed-batch operation strategy, 132.60 g of biomass and 69.15 g of lipid were harvested. The lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was poorer at 107 mg/L/h. Under the semi-batch operation strategy, the overall lipid productivity reached 158 mg/L/h thanks to the shorter cultivation time; harvested biomass and lipid reached 232.62 g and 126.61 g, respectively, and the lipid yield improved from 0.20 to 0.24 g/g glycerol. Product costs of the three operation strategies were also calculated: the lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction in comparison with the batch operation strategy.
Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel
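The performance metrics reported above follow from two simple definitions, volumetric productivity and substrate yield. The sketch below also back-calculates an approximate glycerol consumption from the reported fed-batch yield; that back-calculation is our inference, not a figure from the abstract:

```python
def overall_lipid_productivity(lipid_mg_per_l, hours):
    """Overall volumetric lipid productivity, mg/L/h."""
    return lipid_mg_per_l / hours

def lipid_yield(lipid_g, glycerol_g):
    """Lipid yield on the glycerol substrate, g lipid per g glycerol."""
    return lipid_g / glycerol_g

# Fed-batch run: 69.15 g lipid at a yield of 0.20 g/g glycerol implies
# roughly 69.15 / 0.20 ~ 346 g of glycerol consumed (inferred).
glycerol_inferred = 69.15 / 0.20
```

Comparing strategies on both metrics at once is what drives the conclusion: semi-batch wins on productivity (shorter time) and on yield.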
Procedia PDF Downloads 148
849 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes
Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi
Abstract:
Concave Surface Slider devices have been used more and more in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a biaxial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses are to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software is nowadays available for the adjustment of natural accelerograms, leading to higher-quality spectrum-compatibility and smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. Based on these results, a linear elastic finite element model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events.
The spectrum-compatibility of the bidirectional earthquakes has been studied by considering different combinations of the single components and by adjusting individual records: with the proposed procedure, results show small dispersion and good agreement with the assumed design values.
Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation
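A common code-style acceptance rule for a record set, used here purely as an illustration of how average spectrum-compatibility can be checked numerically (a Eurocode-8-type 90% lower bound on the mean spectrum; the abstract does not state which criterion was adopted), can be sketched as:

```python
def is_set_spectrum_compatible(record_spectra, target, lower_tol=0.90):
    """Check that the mean spectral ordinate of the record set never falls
    below lower_tol times the target spectrum at any period point."""
    n = len(record_spectra)
    mean = [sum(rec[i] for rec in record_spectra) / n
            for i in range(len(target))]
    return all(m >= lower_tol * t for m, t in zip(mean, target))

target = [1.0, 0.8, 0.5, 0.3]  # target spectrum at 4 period points (g), illustrative
records = [[1.1, 0.7, 0.5, 0.30],
           [0.9, 0.9, 0.6, 0.28],
           [1.0, 0.8, 0.4, 0.32]]
ok = is_set_spectrum_compatible(records, target)
```

For bidirectional input, the same check would be applied to whatever combination of the two component spectra (e.g., SRSS or geometric mean) the chosen definition prescribes.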
Procedia PDF Downloads 292